Harassment / Cyberbullying

Definition: Unwanted and repeated behavior that threatens, annoys, or intimidates a victim.
Legal Status:
Rarely criminalized
Platform ToS:
Violates Policy
Victim Visibility:
Contextually Sensitive
TSPA Abuse Type:
User Safety: Harassment and Bullying

Like any tool of communication and collaboration, digital platforms tend to facilitate all forms of harassment: the act or pattern of threatening, intimidating, or humiliating someone else - sometimes called Cyberbullying when the parties are children. Platforms are uniquely susceptible to harassment for a few reasons:

  • Users tend to say and do things on platforms that they would not say or do in real life
  • Platforms enable greater reach for a small number of bad actors to harass a large number of victims
  • Platforms often incentivize engagement, and disrespectful/abusive content tends to garner attention.

While the motivations for harassment vary widely, as does the degree of intent, harassment in digital spaces can be helpfully divided into two varieties:

  1. Individual Harassment - the inherent possibility that any medium for communication will be used by a sufficiently motivated user to annoy, demean, or intimidate another. This is hard to prevent, because the power of platforms derives largely from their capacity to let users connect and communicate, and those same core features can be used to harass.
  2. Structurally Supported Harassment - digital spaces have norms just like physical spaces: libraries have different norms around yelling than concerts do. When the structure of a digital space is insular, informal, or ephemeral, harassment is more frequent, and can become normalized in the space. Many online gaming communities stumble into these unfortunate patterns.

While individual harassment may not be possible to fully prevent, there is a wide slate of well-established design techniques that give users the ability to block other users and limit how those users can see and interact with them. These tools can be effective if they are readily accessible, and they can be proactively suggested when harassing content or behavior is detected. However, since these tools typically operate on the 1:1 connection between users, they can be circumvented by motivated actors who are able to create multiple accounts. Though these techniques can't prevent harm entirely, they can help users feel more resilient to it and, by taking action against specific users, regain a sense of safety and control over their experience on a platform.
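
A minimal sketch of the 1:1 blocking described above (the class and method names are illustrative; a real platform would persist block state server-side and enforce it at every surface where users can interact):

```python
# Illustrative sketch of a 1:1 block list. Because each entry only
# covers one pair of accounts, a motivated harasser can circumvent it
# by creating new accounts -- the limitation noted in the text.

class BlockList:
    def __init__(self):
        self._blocks = set()  # (blocker_id, blocked_id) pairs

    def block(self, blocker_id: str, blocked_id: str) -> None:
        self._blocks.add((blocker_id, blocked_id))

    def can_interact(self, sender_id: str, recipient_id: str) -> bool:
        # Interaction is suppressed in both directions once either
        # party has blocked the other.
        return (recipient_id, sender_id) not in self._blocks and \
               (sender_id, recipient_id) not in self._blocks
```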

While individual harassment can be expected on any platform, structurally supported harassment is typically a signal that something is wrong with the platform's design. Platforms that see emerging signs of it should ask themselves some hard questions about their business model and the design of their systems:

  • Why do harassers feel comfortable in this space?
  • Are there imbalances in demographics that exacerbate this problem?
  • Do the repercussions for harassment actually deter users?
  • How can we set a healthier tone for conversations?

What features facilitate Harassment / Cyberbullying?

Profiles
Individuals' ability to represent themselves in a digital space.
Comments
Responses to primary content, or to other comments.
Real-Time Chat
Features that enable users to exchange text in real time.

How can platform design prevent Harassment / Cyberbullying?

Author Comment-Moderation
Enable primary content creators control over the comments layered on their content.
Affinity To Comment
Require a history of interaction between users before they're allowed to interact in comments.
Must Request to Message
Only allow frictionless initiation of a conversation between established connections.
Three Insult Rule
Rather than looking at whether individual pieces of content constitute harassment, consider patterns of behavior.
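
A sketch of how this pattern-based approach might work. The classifier, threshold, and window below are all illustrative assumptions, not values from the text; a real system would use a trained toxicity model rather than a word list:

```python
import time
from collections import defaultdict, deque

# Illustrative "three insult rule": instead of judging any single
# comment in isolation, count insulting comments per user within a
# recent window and escalate once a pattern emerges.

INSULT_THRESHOLD = 3          # assumed value
WINDOW_SECONDS = 24 * 60 * 60 # assumed one-day window

_recent_insults = defaultdict(deque)  # user_id -> insult timestamps

def looks_insulting(text):
    # Stand-in for a real toxicity classifier.
    return any(w in text.lower() for w in ("idiot", "loser", "stupid"))

def record_comment(user_id, text, now=None):
    """Return True when the user's pattern of behavior crosses the threshold."""
    now = time.time() if now is None else now
    if not looks_insulting(text):
        return False
    window = _recent_insults[user_id]
    window.append(now)
    # Drop insults that fall outside the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= INSULT_THRESHOLD
```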
Temporal Comment Limits
Restricting the volume of comments a user can post in a given time window encourages them to think twice about their actions.
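
One common way to implement such a limit is a per-user sliding-window rate limiter; the limit and window below are illustrative assumptions:

```python
import time
from collections import defaultdict, deque

# Illustrative sliding-window rate limiter for comments.
MAX_COMMENTS = 5      # assumed limit
WINDOW_SECONDS = 60.0 # assumed window

_comment_times = defaultdict(deque)  # user_id -> comment timestamps

def may_comment(user_id, now=None):
    """Return True and record the comment if the user is under the limit."""
    now = time.time() if now is None else now
    window = _comment_times[user_id]
    # Expire timestamps older than the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_COMMENTS:
        return False
    window.append(now)
    return True
```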
Omit comment reaction volume
Don't prominently display the number of likes or other forms of feedback a comment gets.
Comment Tone Check Popup
Before posting content containing vulgarities, prompt a user to think twice.
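
A minimal sketch of the client-side check that would trigger such a prompt. The word list is a tiny placeholder; a production filter would use a maintained lexicon or a classifier:

```python
# Illustrative pre-post "tone check": if a draft comment contains
# vulgarities, the client shows a "think twice" prompt before posting.

VULGARITIES = {"damn", "hell", "crap"}  # placeholder list

def needs_tone_check(draft):
    """Return True if the draft should trigger the confirmation popup."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    return not VULGARITIES.isdisjoint(words)
```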
Because it provides users with a sense of agency and control:
Reporting Mechanisms
Allow users to flag content or behavior that they find to be abusive.
No Comments for Fresh Accounts
Don't accept comments from newly created or barely used accounts.
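
This gate can be as simple as checking account age and prior activity before accepting a comment; the thresholds below are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Illustrative gate on comments from fresh or barely used accounts.
MIN_ACCOUNT_AGE = timedelta(days=7)  # assumed threshold
MIN_PRIOR_POSTS = 3                  # assumed threshold

def may_accept_comment(created_at, prior_post_count, now=None):
    """Return True if the account is old and active enough to comment."""
    now = now or datetime.now()
    old_enough = now - created_at >= MIN_ACCOUNT_AGE
    active_enough = prior_post_count >= MIN_PRIOR_POSTS
    return old_enough and active_enough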
Don't allow posting of Location
Use content filters to prevent users from posting addresses, latitude/longitude coordinates, or other location data.
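
A sketch of such a filter using regular expressions. The two patterns below (decimal latitude/longitude pairs and US-style street addresses) are illustrative; a real filter would cover many more address formats:

```python
import re

# Illustrative content filter for location data. Patterns are
# deliberately simple examples, not a complete solution.
LATLONG = re.compile(r"-?\d{1,3}\.\d+\s*,\s*-?\d{1,3}\.\d+")
STREET = re.compile(
    r"\b\d+\s+\w+(\s\w+)*\s(Street|St|Avenue|Ave|Road|Rd)\b",
    re.IGNORECASE,
)

def contains_location(text):
    """Return True if the text appears to contain location data."""
    return bool(LATLONG.search(text) or STREET.search(text))
```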
No Search by Location
When abusers can discover content and accounts by location, the creators of content are in danger.