Harm: Political Misinformation

Definition: Spreading false or misleading information to influence political attitudes and outcomes.
Motivation: Ideological
Legal Status: Rarely criminalized
Platform ToS: Allowed by Policy
Victim Visibility: Unaware
Classification: Contextually Sensitive
TSPA Abuse Type: None

Political misinformation on online platforms demands far more nuance than simply identifying and taking down false content. Unlike election misinformation, which often involves easily verifiable claims (such as erroneous election dates or death hoaxes), political misinformation spans a broad spectrum, from content that is misleading yet technically true to transparently malicious fabrications. When considering the role of online platforms in mitigating this harm, the focus should be on platform design rather than on efforts to define, scan, and label content.

The origin of political misinformation may be malicious, ideologically motivated, or entirely unintentional, and the bar for what counts as a violation is ever shifting, which makes analysis through policy or intent nearly useless.

However, the design of a platform significantly influences the propagation of political misinformation, and since this is clearly within the platform's control, it is a helpful lens to apply to the problem:

  • Firstly, there is the matter of how the platform contributes to potentially misleading content: does it actively promote or recommend it, or does it simply allow misinformation to spread organically? An algorithm that elevates sensational or divisive content inadvertently becomes a delivery mechanism for misinformation (see the ranking sketch after this list).
  • Secondly, the cultural incentives inherent to a platform can either curtail or exacerbate the problem. For instance, a platform culture that values 'dunking' or 'owning' opponents might incentivize users to prioritize witty, pithy content over fact-based discourse. This environment can further blur the lines between legitimate critique and misleading information.
  • On a positive note, platform features that guide users toward the original source, or provenance, of information, or that offer related articles, can act as structural dampeners of misinformation. By providing context and a broader perspective, they empower users to evaluate content more critically before sharing or acting on it.
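To make the first point above concrete, here is a minimal, hypothetical sketch of how an engagement-optimized ranker differs from a chronological feed. The scoring weights, field names, and the `outrage_score` signal are illustrative assumptions, not any platform's actual algorithm; the point is only that optimizing for predicted engagement tends to surface the most provocative items.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    age_hours: float
    predicted_clicks: float   # model-estimated engagement (assumed signal)
    predicted_shares: float   # model-estimated reshares (assumed signal)
    outrage_score: float      # 0..1 proxy for sensational/divisive framing (assumed signal)

def chronological_rank(posts: list[Post]) -> list[Post]:
    # A subscription-style feed: newest first, no engagement optimization.
    return sorted(posts, key=lambda p: p.age_hours)

def engagement_rank(posts: list[Post]) -> list[Post]:
    # A recommendation-style feed: order by predicted engagement. Because
    # divisive framing reliably drives clicks and shares, the same objective
    # that maximizes engagement also boosts sensational content.
    def score(p: Post) -> float:
        engagement = 1.0 * p.predicted_clicks + 2.0 * p.predicted_shares
        return engagement * (1.0 + p.outrage_score) / (1.0 + p.age_hours)
    return sorted(posts, key=score, reverse=True)
```

Nothing in `engagement_rank` checks accuracy; the tilt toward provocative content is a side effect of the objective, which is why the structural framing in this section matters.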

Addressing political misinformation on online platforms requires a shift in both platform attitudes and cultural ones. Platforms need to step back and examine the problem structurally, asking why their environment is susceptible to the proliferation of misinformation in the first place. Culturally, we as users of an open internet also need to step back and examine how norms like 'dunking' and 'reshare-before-read' help the problem proliferate. Prioritizing this broader approach, focused on creating an environment less conducive to misinformation, will be far more effective than merely drawing lines between honest mistakes and deliberate falsehoods.

What features facilitate Political Misinformation?

Recommendation
A platform proactively inserting content into a user's view.
Subscriptions
Lets a user opt in to receive new content from another channel.

How can platform design prevent Political Misinformation?

Flatten Virality Curves
Cap the attention a user can receive at a multiple of their prior reach (see the sketch after this list).
Ban Proactive Content Recommendation
Prohibit infinite feeds for children, and provide a universal opt-out for adults.
Read Before Re-share
Prevent gut-reaction reshares by adding a waiting period estimated from the length of the content (also sketched below).
Crowdsourced Annotations
Allow users to add context to the posts of others when it is widely seen to be useful.
Media Provenance
Record and display the chain of custody and original source for media (see the provenance sketch below).
Mix in Authoritative Content
Add authoritative content from trusted partners on users' subscribed topics.
Auto-Generated Content Quiz
Encourage users to dig deeper than headlines by validating their knowledge of the subject.
Require Labels on AI Created Content
Enact legislation for the mandatory prominent disclosure of AI generation.
Recommend Only Verified Users
Require identity verification before adding content from an account to a recommendation engine.
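As a rough illustration of how "Flatten Virality Curves" and "Read Before Re-share" could be enforced in code, here is a minimal sketch. The cap multiple, the reading-speed constant, and the helper names are assumptions made for illustration, not a reference implementation of any platform's policy.

```python
import time

REACH_CAP_MULTIPLE = 10      # assumed policy: at most 10x the author's prior reach
WORDS_PER_SECOND = 4.0       # assumed average reading speed for the reshare delay

def may_distribute_further(current_impressions: int, prior_median_reach: int) -> bool:
    # Flatten Virality Curves: stop amplifying once a post has already
    # received a multiple of the attention this author normally gets.
    return current_impressions < REACH_CAP_MULTIPLE * max(prior_median_reach, 1)

def reshare_delay_seconds(word_count: int) -> float:
    # Read Before Re-share: estimate how long the content takes to read.
    return word_count / WORDS_PER_SECOND

def may_reshare(opened_at: float, word_count: int, now: float | None = None) -> bool:
    # Allow the reshare only after the user has plausibly had time to read.
    now = time.time() if now is None else now
    return (now - opened_at) >= reshare_delay_seconds(word_count)
```

Both checks are deliberately content-blind: they throttle the mechanics of spread rather than judging truth, in keeping with the structural approach argued for above.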
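For "Media Provenance", the core idea is an append-only chain of custody attached to each media item. The record fields and hashing scheme below are illustrative assumptions (a real system would more likely build on a standard such as C2PA), but they show the shape of the data a platform could display next to a shared image or video.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class CustodyEvent:
    actor: str        # who touched the media (uploader, editor, platform)
    action: str       # e.g. "captured", "edited", "reposted"
    timestamp: str    # ISO-8601 time of the event
    prev_hash: str    # hash of the previous event, forming a tamper-evident chain

@dataclass
class ProvenanceRecord:
    media_hash: str                     # hash of the original media bytes
    original_source: str                # first known publisher or capture device
    chain: list[CustodyEvent] = field(default_factory=list)

    def append(self, actor: str, action: str, timestamp: str) -> None:
        # Link each new event to the hash of the previous one.
        prev = self._hash_event(self.chain[-1]) if self.chain else self.media_hash
        self.chain.append(CustodyEvent(actor, action, timestamp, prev))

    @staticmethod
    def _hash_event(event: CustodyEvent) -> str:
        payload = json.dumps(event.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Displaying the original source and the length of the custody chain alongside the media gives users the "where did this come from?" context described above.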