Harm: Self-Harm Amplification
Definition: The exacerbation and normalization of self-destructive behavior.
Motivation: Personal
Legal Status: Rarely criminalized
Platform ToS: Violates Policy
Victim Visibility: Unaware
Classification: Contextually Sensitive
TSPA Abuse Type: User Safety: Suicide and Self Harm

Self-Harm Amplification (SHA) is the phenomenon in which platforms inadvertently contribute to the escalation and reinforcement of self-harming behaviors among their users. Self-harm includes eating disorders, self-inflicted wounds, and suicidal ideation, and is most pervasive among adolescents. Amplification often occurs when individuals seeking information or support around their self-harming tendencies instead encounter content or communities that glorify or romanticize those behaviors, normalizing and encouraging them. This reduces the user's capacity to cope with and overcome the behaviors.

Though their participation in this pattern is inadvertent, platforms bear some responsibility for amplifying these tendencies. First, content recommendation algorithms prioritize engagement and user interaction, which tends to amplify and radicalize the beliefs and ideas a user is already drawn to. In the context of self-harm, this dynamic can promote content that aligns with the viewer's already unhealthy leanings. Second, the sense of anonymity and the availability of online communities can create an environment where individuals feel comfortable sharing and seeking validation for their self-harming behaviors, and the same anonymity that enables that comfort also allows dangerous ideas around self-harm to circulate.
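To make the first dynamic concrete, here is a minimal sketch, in Python, of an engagement-weighted ranker whose topic-affinity term grows with each interaction. All field names, weights, and topics are illustrative assumptions, not any platform's real ranking system; the point is only that nothing in an engagement objective distinguishes harmful topics from benign ones.

```python
# Illustrative sketch: an engagement-weighted scorer that reinforces
# whatever topics a user already interacts with.

def rank_feed(candidates, user_affinity):
    """Order candidate posts by predicted engagement times topical affinity."""
    def score(post):
        affinity = user_affinity.get(post["topic"], 0.1)  # prior interest in the topic
        return post["predicted_engagement"] * affinity
    return sorted(candidates, key=score, reverse=True)

def record_interaction(user_affinity, post, weight=0.2):
    """Each interaction nudges affinity upward, reinforcing the same topic."""
    user_affinity[post["topic"]] = user_affinity.get(post["topic"], 0.1) + weight

# A user who has already engaged with self-harm-adjacent content sees it ranked
# above otherwise more engaging content on other topics.
affinity = {"self_harm": 0.9, "cooking": 0.2}
feed = rank_feed(
    [
        {"id": 1, "topic": "self_harm", "predicted_engagement": 0.6},
        {"id": 2, "topic": "cooking", "predicted_engagement": 0.8},
    ],
    affinity,
)
```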

Platforms have attempted to address self-harm amplification through traditional approaches, but they typically struggle to do so. Moderation systems that rely on content analysis have difficulty distinguishing genuine support or educational resources from content that amplifies harmful practices. Additionally, platforms walk a fine line between addressing harmful content and undermining users' sense of safety when sharing difficult personal experiences or seeking support.
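A toy illustration of that moderation difficulty, assuming a naive term-based filter and made-up example posts: a recovery-support post and a post that romanticizes the same behavior share the same surface vocabulary, so a purely lexical signal cannot tell them apart.

```python
# Illustrative only: a keyword filter flags both posts, so a term-based system
# either removes genuine support content or leaves harmful content up.

FLAGGED_TERMS = {"self-harm", "relapse"}

def naive_flag(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

support_post = "One year since my last relapse. If you're struggling with self-harm, recovery is possible."
harmful_post = "Relapse tips: how to hide self-harm from your parents."

assert naive_flag(support_post) and naive_flag(harmful_post)
```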

What features facilitate Self-Harm Amplification?

Recommendation
A platform proactively inserting content into a user's view.

How can platform design prevent Self-Harm Amplification?

Allowing users more control over their time on the platform can help them escape algorithmic rabbit holes; a sketch after this list illustrates how these mitigations could combine.
Self-Imposed Time Limits
Allow users to set hard cutoffs on platform use.
Omit Comment Reaction Volume
Don't prominently display the number of likes or other forms of feedback a comment gets.
Mix in Authoritative Content
Add authoritative content from trusted partners on users' subscribed topics.
Limit Time on Platform
To minimize the potential harms of a platform, enforce global time limits on engagement.
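As a rough sketch of how these mitigations could fit together in one feed-assembly step: the function, field names, and one-in-five mixing ratio below are assumptions for illustration, not a prescribed design.

```python
# Hedged sketch combining the mitigations above: time limits, hidden reaction
# counts, and interleaved authoritative content.
import time

def assemble_feed(ranked_posts, authoritative_posts, session, time_limit_s):
    # Self-imposed or global time limits: stop serving content once the
    # user's chosen (or platform-wide) cutoff is reached.
    elapsed = time.time() - session["started_at"]
    if elapsed >= time_limit_s:
        return [{"type": "limit_notice", "text": "You've reached your time limit for today."}]

    feed = []
    for i, post in enumerate(ranked_posts):
        # Omit reaction volume: strip like/reaction counts before rendering.
        post = {k: v for k, v in post.items() if k not in {"like_count", "reaction_count"}}
        feed.append(post)
        # Mix in authoritative content: interleave a trusted-partner post
        # roughly every fifth item on topics the user follows.
        if (i + 1) % 5 == 0 and authoritative_posts:
            feed.append(authoritative_posts.pop(0))
    return feed
```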