Harm:

Online Shaming

Definition: Using the internet to publicly shame or ridicule a person or group.
Motivation:
Personal
Legal Status:
Rarely criminalized
Platform ToS:
Allowed by Policy
Victim Visibility:
Aware
Classification:
Contextually Sensitive
TSPA Abuse Type:
User Safety: Harassment and Bullying

The same tendency that has led humans to form mobs throughout history manifests in digital spaces to mete out a retributive and imprecise justice.

A trend, neither boon nor abuse

Though this tendency is dangerous, it is important to first recognize that, in giving voice to the masses, the internet has enabled the disempowered to challenge the powerful, and in doing so has often achieved more just outcomes than were previously possible. The #MeToo movement was only possible because of the space and momentum that social media provided. Killings of unarmed Black people first gained public attention and outrage through informal social channels. By giving voice to the marginalized and accelerating narratives through virality mechanisms, social media mobs have meaningfully moved the needle toward justice on many occasions.

However, just as fire can warm or destroy, the social dynamics that outrage-optimized algorithms engender are not inherently good or bad: they can be aimed as easily at the misunderstood as at the malevolent, and the distinction (or the truth) between the two is not always clear. For this reason, many platforms seek to limit the degree to which they can become the organizing site of brigades - a choice that can reasonably be interpreted either as a consolidation of power or as a thoughtful abdication of it.

The Question is Amplification

A common refrain from platforms asked how to handle issues like this one is that "we are not responsible for the speech or behavior of users." While that may be true, platforms' role in the formation of online mobs is not putting words in the mouths of users, but putting content in front of their eyes.

Internet platforms aid the formation of mobs through features that optimize for attention and, by way of human nature, for outrage. The capacity for social mobs to form online is constrained by the speed at which anger-inciting information can spread.

Because of this, limitations on how quickly messages can be amplified, or how quickly a person's profile can rise, reduce a platform's usefulness as a mechanism of bulk retribution, and in doing so set a clearer community precedent about the types of interactions that are expected and designed for in an online space.
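As a concrete illustration, here is a minimal sketch of one way such a limit could work: a sliding-window throttle on how many times a post can be reshared per hour. Every name and number in it (ShareThrottle, MAX_SHARES_PER_HOUR, the one-hour window) is a hypothetical assumption for illustration, not any particular platform's mechanism.

```python
# Hypothetical sketch: throttle how quickly a single post can be amplified.
import time
from collections import deque

MAX_SHARES_PER_HOUR = 500  # assumed ceiling on reshares within the window
WINDOW_SECONDS = 3600      # assumed one-hour sliding window


class ShareThrottle:
    """Tracks reshare timestamps per post and blocks shares past the cap."""

    def __init__(self):
        self._events: dict[str, deque] = {}

    def allow_share(self, post_id: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        events = self._events.setdefault(post_id, deque())
        # Drop reshare events that have aged out of the sliding window.
        while events and now - events[0] > WINDOW_SECONDS:
            events.popleft()
        if len(events) >= MAX_SHARES_PER_HOUR:
            return False  # defer or queue the reshare instead of serving it
        events.append(now)
        return True
```

The point is not the specific numbers but the shape of the intervention: the post remains visible, yet the rate at which it can be pushed in front of new eyes is bounded.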

What features facilitate Online Shaming?

Search
Locating and ranking content to be responsive to a user's query.
Identity
Individuals' ability to represent themselves in a digital space.

How can platform design prevent Online Shaming?

Limit Content Reach
Put upper bounds on how many people can view/share/interact with content.
Feedback Only From Authoritative Sources
Only allow users with geographic proximity, purchase history, or other signals of genuine engagement to leave ratings.
Hide Interaction Counts
Foster authentic interaction by making numerical properties less central.
Comment Ordering
Set the tone for the conversation by choosing the order in which voices are heard.
Flatten Virality Curves
Cap the attention a user can receive at a multiple of their prior reach (sketched in code after this list).
Comment Tone Check Popup
Prompt a user to think twice before posting content containing vulgarities.
Temporal Comment Limits
Restrict the volume of comments people can post in a given period, prompting them to think twice about their actions.
Three Insult Rule
Rather than looking at whether individual pieces of content constitute harassment, consider patterns of behavior (also sketched in code after this list).
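To make the "Flatten Virality Curves" idea above concrete, here is a minimal sketch that caps the impressions a recommendation system will serve for a new post at a multiple of the author's historical reach. The multiplier, the floor, and every name (AuthorStats, reach_cap, should_amplify) are assumptions for illustration, not a real system's API.

```python
# Hypothetical sketch of "Flatten Virality Curves": bound a post's reach
# relative to the author's historical audience.
from dataclasses import dataclass

REACH_MULTIPLIER = 10  # assumed: a post may reach at most 10x prior reach


@dataclass
class AuthorStats:
    median_reach: int  # median impressions across the author's recent posts


def reach_cap(stats: AuthorStats, floor: int = 1000) -> int:
    """Maximum impressions the ranking system will serve for a new post."""
    return max(floor, stats.median_reach * REACH_MULTIPLIER)


def should_amplify(current_impressions: int, stats: AuthorStats) -> bool:
    # Once the cap is hit, the post stays reachable via direct links and
    # search, but the recommendation system stops boosting it further.
    return current_impressions < reach_cap(stats)
```

The design choice here is that no single post can leap from a small account to millions of viewers in one step; attention accumulates gradually, giving both the author and the platform time to intervene before a pile-on fully forms.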
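Similarly, the "Three Insult Rule" can be read as a pattern-of-conduct check rather than a per-comment check. The sketch below counts moderation-flagged comments per author-target pair within a rolling window; the threshold, the window, and all names (BehaviorPatternTracker, record_flagged_comment) are hypothetical.

```python
# Hypothetical sketch of the "Three Insult Rule": escalate based on a
# pattern of flagged comments toward the same target, not a single post.
from collections import defaultdict
from datetime import datetime, timedelta

INSULT_THRESHOLD = 3                # assumed: three flagged comments...
PATTERN_WINDOW = timedelta(days=7)  # ...toward one target within a week


class BehaviorPatternTracker:
    """Counts moderation-flagged comments per (author, target) pair."""

    def __init__(self):
        self._flags: dict[tuple, list] = defaultdict(list)

    def record_flagged_comment(self, author: str, target: str,
                               when: datetime) -> bool:
        """Record a flagged comment; return True when the pattern threshold
        is reached and the case should be escalated for human review."""
        history = self._flags[(author, target)]
        history.append(when)
        # Keep only flags that fall inside the rolling window.
        recent = [t for t in history if when - t <= PATTERN_WINDOW]
        self._flags[(author, target)] = recent
        return len(recent) >= INSULT_THRESHOLD
```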