To minimize the impact of harms like Manipulated Media / Deep Fakes and Political Misinformation, change the design of Search.
Intervention: Crowdsourced Annotations

Definition: Allow users to add context to the posts of others when it is widely seen to be useful.

Kind of Intervention: User Control
Reversible: Easily Tested + Abandoned
Suitability: General
Technical Difficulty: Challenging
Legislative Target: Yes

While "more speech" can sometimes answer inaccurate or harmful speech, by default the structure of information display on a platform doesn't put counter-speech in direct conversation with the speech it aims to address.

Some attempts to change this have been made in the realm of crowdsourced annotations: pieces of context that users can attach to primary content, which then appear to any user who sees that content. Attachment is typically gated behind a mechanism of aggregated feedback, so the context is only displayed once there is broad consensus about its utility.
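
A minimal sketch of this gating mechanism may make the shape concrete. The class names, vote thresholds, and binary helpful/unhelpful ratings below are illustrative assumptions, not any platform's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    author_id: str
    text: str
    helpful_votes: int = 0
    unhelpful_votes: int = 0

    def has_consensus(self, min_votes: int = 10, min_ratio: float = 0.8) -> bool:
        """Show the annotation only once enough raters agree it is useful."""
        total = self.helpful_votes + self.unhelpful_votes
        return total >= min_votes and self.helpful_votes / total >= min_ratio

@dataclass
class Post:
    post_id: str
    body: str
    annotations: list[Annotation] = field(default_factory=list)

    def visible_annotations(self) -> list[Annotation]:
        """Context shown alongside the post for every viewer."""
        return [a for a in self.annotations if a.has_consensus()]

# Usage: an annotation becomes visible only after broad positive feedback.
post = Post("p1", "A viral claim about a candidate")
note = Annotation("u42", "Independent fact-checkers rate this claim false.")
post.annotations.append(note)
note.helpful_votes, note.unhelpful_votes = 12, 1
assert post.visible_annotations() == [note]
```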

These systems have the potential to be partial antidotes to misinformation on platforms, though they can't solve the thornier varieties of the problem. They're reactive, they can be abused, they take effort from contributors, and they outsource the determination of accuracy to the consumers of the content, who are unlikely to be objective, neutral jurors. Despite these flaws, systems like this are starting to gain traction, and where they work, they offer the best of all worlds: an approach that is content neutral, that scales, and that helps users better sort out truth from fiction.
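
X's Community Notes (formerly Birdwatch), the most prominent deployment, tries to blunt the "non-neutral jurors" problem with a bridging requirement: a note is surfaced only when raters who usually disagree with one another both find it helpful. The sketch below illustrates that idea in simplified form; the cluster labels, thresholds, and function name are assumptions for illustration, not the actual ranking algorithm:

```python
from collections import Counter

def bridged_consensus(
    ratings: list[tuple[str, str, bool]],  # (rater_id, viewpoint_cluster, found_helpful)
    min_per_cluster: int = 5,
    min_clusters: int = 2,
) -> bool:
    """Surface an annotation only if raters from several viewpoint
    clusters each found it helpful, not just one like-minded group."""
    helpful_by_cluster: Counter[str] = Counter()
    for _rater_id, cluster, found_helpful in ratings:
        if found_helpful:
            helpful_by_cluster[cluster] += 1
    supportive = [c for c, n in helpful_by_cluster.items() if n >= min_per_cluster]
    return len(supportive) >= min_clusters
```

Requiring cross-cluster agreement raises the bar for coordinated abuse: a single faction voting in lockstep cannot push its preferred context onto a post.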
