Intervention: Reporting Mechanisms

Definition: Allow users to flag content or behavior that they find to be abusive.
Kind of Intervention: User Control
Reversible: Easily Tested + Abandoned
Suitability: General
Technical Difficulty: Hard
Legislative Target: Yes

Reporting is the historical bedrock of counter-abuse protections on platforms. A platform's users will almost always know something is wrong before the platform itself has the capacity or time to notice. Deputizing users as reporters in the content moderation process empowers them to see themselves as co-creators of the safety of the online spaces they inhabit, while helping integrity teams direct their time and attention toward the issues users are feeling most acutely.

However useful reporting is at generating visibility, it is only part of a solution, not a complete one. First, when reports are backed by human review, the costs of evaluation, triage, and response grow in proportion to the volume of visible abuse on the platform, so when platforms are inundated with novel forms or strategies of abuse, reporting functions are commonly overrun. Moreover, though reporting is useful for gaining visibility into acute and legible forms of abuse, it often fails to shine a light on issues that are not obvious to the user harmed by them, such as malware, which is hard for a user to detect, or hate speech, which often occurs out of view of the people it targets. Additionally, reporting systems must be linked to careful review before action is taken on content; otherwise they can be trivially weaponized in capricious or ideological attempts to take down opponents' content. Finally, reporting is always a slow and reactive process, and while it can give the reporting user a sense of agency and action, in practice it does not prevent most of the harm the content causes.
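To make the review-before-action constraint concrete, the sketch below shows one minimal way a report intake could be structured so that reports only ever feed a human review queue, never trigger automatic takedowns, and duplicate reports from a single account are ignored so mass-reporting cannot inflate apparent severity. All names here (Report, ReviewItem, ReportIntake, submit, review_batch) are hypothetical illustrations, not a description of any particular platform's implementation.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Report:
    content_id: str
    reporter_id: str
    reason: str  # e.g. "harassment", "spam", "impersonation"


@dataclass
class ReviewItem:
    content_id: str
    reasons: list[str] = field(default_factory=list)
    report_count: int = 0


class ReportIntake:
    """Collects user reports and routes them to human review; never acts on content itself."""

    def __init__(self) -> None:
        self.seen: set[tuple[str, str]] = set()   # (reporter_id, content_id) pairs already counted
        self.queue: dict[str, ReviewItem] = {}    # content_id -> pending review item

    def submit(self, report: Report) -> Optional[ReviewItem]:
        # Ignore duplicate reports from the same user so a single account
        # cannot inflate how severe a piece of content appears to reviewers.
        key = (report.reporter_id, report.content_id)
        if key in self.seen:
            return None
        self.seen.add(key)

        item = self.queue.setdefault(report.content_id, ReviewItem(report.content_id))
        item.report_count += 1
        item.reasons.append(report.reason)
        return item

    def review_batch(self, size: int = 10) -> list[ReviewItem]:
        # Surface the most-reported content first; the decision itself
        # (remove, label, dismiss) stays with a human reviewer.
        items = sorted(self.queue.values(), key=lambda i: i.report_count, reverse=True)
        return items[:size]
```

Even in this simplified form, the design choice is visible: the intake aggregates and prioritizes signals, while the authority to remove or label content remains with reviewers, which is what keeps the mechanism from being turned into a takedown weapon.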

Because reporting systems are so ubiquitous, many sets of industry best practices have emerged that offer excellent guidance on how to build and reform them. Though only one part of a broader trust and safety strategy, well-built reporting systems help users feel empowered, safe, and heard, while giving platforms clear visibility into what is currently frustrating their users.
