Harm: Incentivizing Dangerous Behavior

Definition: Creating dynamics in which dangerous activity is glorified and implicitly encouraged.
Motivation: Financial
Legal Status: Rarely criminalized
Platform ToS: Allowed by Policy
Victim Visibility: Unaware
Classification: Contextually Sensitive
TSPA Abuse Type: User Safety: Dangerous Misinformation and Endangerment

One of the unfortunate elements of how the internet is configured today is that it is often optimized for shock, which in turn rewards content that is offensive, outlandish, or made expressly at someone else's expense.

Dangerous pranks, drinking livestreams, the infamous Tide Pod challenge: all are products of a media ecosystem that prizes traffic and is indifferent to the mechanisms influencers use to gain it. Online environments also easily foster cultures in which the spectacle of events eclipses the people generating the content; users push for absurd, dangerous outcomes because the distance of the internet lets them dissociate the characters on the screen from the real people who get hurt.

“People knew what the outcome of buying [them] the strongest shot is, but they still did it, because they wanted to see a tragedy,” she added. “It was a whole audience of pushers.” - HuffPo, "Deadly Drinking on TikTok"

Search "deadly prank" in the news and you will find story after story of people creating challenges or pranks as jokes, often with staged actors or deceptive editing, and of others following them blindly, harming themselves or others in the process.

While individual creators originate almost all of this content, it is critical to recognize the role platforms play in incentivizing outrageous material, which in turn encourages the growth of these explicitly dangerous sub-genres. A platform that optimizes for attention and engagement is unlikely ever to fully grapple with wave after wave of Tide Pod challenges, because the patterns of attention, danger, and outrage are deeply interwoven and evoke similar responses.

What features facilitate Incentivizing Dangerous Behavior?

Recommendation
A platform proactively inserting content into a user's view.

How can platform design prevent Incentivizing Dangerous Behavior?

Ban Proactive Content Recommendation
Prohibit infinite feeds for children, and provide a universal opt-out for adults.
Hide Interaction Counts
Foster authentic interaction by making numerical properties less central.
Because the incentives for dangerous behavior arise out of attentional pressures:
Limited Number of Subscriptions
By limiting the number of subscribers (or the number of subscriptions), a platform can design toward real-world connections, and away from exponential scale distributions.
Flatten Virality Curves
Cap the attention a user can receive at a multiple of their prior reach.
Limit Content Reach
Put upper bounds on how many people can view, share, or interact with a piece of content (both caps are sketched in code after this list).
Incentives for dangerous behavior arise largely out of influencer dynamics, so decentralizing influence would reduce the prevalence of this problem.
All Subscriptions Reciprocal
Require "following" to be bidirectional to avoid an exponential distribution of reach and attention.
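To make the reach-limiting interventions above concrete, here is a minimal sketch of how a feed could enforce a virality cap and an absolute reach limit before amplifying an item. All names (ReachLedger, VIRALITY_MULTIPLE, may_serve) and the specific numbers are illustrative assumptions, not any platform's actual implementation.

```python
# Hypothetical sketch of a virality cap: before serving an item to more
# viewers, limit its additional reach to a multiple of the creator's prior
# reach, plus a platform-wide absolute ceiling.

VIRALITY_MULTIPLE = 3        # new reach may be at most 3x prior reach
ABSOLUTE_REACH_CAP = 50_000  # hard upper bound on recommended views per item


class ReachLedger:
    """Tracks prior reach per creator and views served per item."""

    def __init__(self):
        self.prior_reach = {}    # creator_id -> unique viewers last period
        self.current_views = {}  # item_id -> views served this period

    def allowed_views(self, creator_id: str) -> int:
        # A creator's items may reach at most a multiple of their prior reach,
        # never exceeding the platform-wide absolute cap.
        prior = self.prior_reach.get(creator_id, 100)  # small default for new creators
        return min(prior * VIRALITY_MULTIPLE, ABSOLUTE_REACH_CAP)

    def may_serve(self, creator_id: str, item_id: str) -> bool:
        # Called by the feed before inserting the item into another user's view.
        served = self.current_views.get(item_id, 0)
        if served >= self.allowed_views(creator_id):
            return False  # cap reached: stop amplifying; direct links still work
        self.current_views[item_id] = served + 1
        return True
```

In this framing, content that exceeds the cap is simply no longer amplified by recommendation; people can still reach it directly, so the limit acts as a brake on virality rather than a takedown.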