Harm: Filter Bubbles

Definition: Algorithmically driven content selection that prioritizes content that aligns with the user's prior ideas.
Motivation: Financial
Legal Status: Rarely criminalized
Platform ToS: Allowed by Policy
Victim Visibility: Unaware
Classification: Contextually Sensitive
TSPA Abuse Type: None

How do Filter Bubbles get created?

Today, many platforms build personalized products that tailor themselves to capture as much user attention, engagement, and value as possible. This typically entails curating content based on a user's preferences, behaviors, and previous interactions, often with sophisticated algorithms, including machine learning models. As these algorithms are fed more data about a given user's behavior, they become better and better at predicting what that user will find most engaging.
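
To make this concrete, here is a minimal sketch of engagement-driven ranking, assuming a simple interest-vector representation of the user. The names (predicted_engagement, rank_feed) and the cosine-similarity scoring are illustrative assumptions, not any specific platform's method; real systems use far richer signals, but the core loop is the same: score each candidate by predicted engagement and show the highest scorers first.

```python
import math
from typing import Dict, List

def predicted_engagement(user_profile: List[float], item_features: List[float]) -> float:
    """Score an item by cosine similarity to the user's learned interest vector."""
    dot = sum(u * f for u, f in zip(user_profile, item_features))
    norms = (math.sqrt(sum(u * u for u in user_profile))
             * math.sqrt(sum(f * f for f in item_features)))
    return dot / norms if norms else 0.0

def rank_feed(user_profile: List[float], candidates: List[Dict]) -> List[Dict]:
    """Order candidate items by predicted engagement, highest first."""
    return sorted(candidates,
                  key=lambda item: predicted_engagement(user_profile, item["features"]),
                  reverse=True)
```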

However, humans typically find content that aligns with their existing preferences and beliefs the most compelling. Outside the context of algorithmic amplification, this phenomenon is known as an Echo Chamber.

When an algorithm tries to guess what a user is going to like, it often (unintentionally) discovers and maximally stimulates this bias, promoting content that aligns with and reinforces the user's existing preferences and beliefs. The result is that users see only a narrow slice of the content on a platform - the content that matches their world view - and remain oblivious to the vast swathes of content that don't mirror their ideas. This phenomenon is called a Filter Bubble.
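
A toy simulation shows how quickly this feedback loop closes. Everything below (the two-topic catalog, the learning rate, the starting profile) is an invented assumption for illustration: the user is always shown the top-scoring item, and each engagement pulls their profile further toward it, so a slight initial lean hardens into a closed bubble.

```python
# Toy two-topic world; all names and numbers are illustrative assumptions.
CATALOG = [
    {"title": "topic A", "features": [1.0, 0.0]},
    {"title": "topic B", "features": [0.0, 1.0]},
]

def score(profile, features):
    """Predicted engagement: how well an item matches the interest profile."""
    return sum(p * f for p, f in zip(profile, features))

def update(profile, clicked_features, rate=0.3):
    """Pull the interest profile toward whatever the user engaged with."""
    return [p + rate * (c - p) for p, c in zip(profile, clicked_features)]

profile = [0.55, 0.45]  # a user with only a slight lean toward topic A
for step in range(6):
    top = max(CATALOG, key=lambda item: score(profile, item["features"]))
    profile = update(profile, top["features"])
    print(step, top["title"], [round(p, 2) for p in profile])
# Topic A wins every round and the profile converges toward [1.0, 0.0]:
# the user never sees topic B again, despite starting nearly neutral.
```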

Why should we care about Filter Bubbles?

While the immediate harms of Filter Bubbles might appear limited, their proliferation poses a serious danger to societal health. By lessening every individual's exposure to diverse perspectives, Filter Bubbles erode our capacity to relate to one another, leading to political and ethnic polarization, the spread of misinformation and conspiracy theories, and, at their most extreme, the disintegration of a shared sense of reality.

How will Filter Bubbles change in the next decade?

If you think Filter Bubbles were a problem of the 2010s, brace yourself for the 2030s: without action, this problem is about to get dramatically worse.

  • The overall volume of content is expected to explode in the next decade as generative AI technologies become more widely available and more enmeshed in creative tooling, making it possible for each user to see an ever more narrowly tailored slice of the internet.
  • The capacity for LLM-based technologies to help content creators automate the production of highly personalized, persuasive, and enthralling content is also expected to rise dramatically.
  • The predictive capacity of recommendation systems is likely to keep improving, due to AI/ML-driven advances in content analysis and more sophisticated predictive architectures.

If we do nothing, the problem of Filter Bubbles is going to get much worse than it is today.

Approaches for Combating the Problem

It's critical to realize: Filter Bubbles arise not out of deliberate malice by platforms, nor out of the ignorance of individuals. Rather, they occur because the economic incentive for platforms to maximize user attention pairs with our cognitive bias toward information we're already comfortable with. We can do little to change how the brain works, but we can intervene to disincentivize business models built around maximal engagement, and set hard limits on platforms' capacity to capture our attention.

What features facilitate Filter Bubbles?

Recommendation
A platform proactively inserting content into a user's view.

How can platform design prevent Filter Bubbles?

Ban Proactive Content Recommendation
Prohibit infinite feeds for children, and provide a universal opt-out for adults.
Omit Comment Reaction Volume
Don't prominently display the number of likes or other forms of feedback a comment gets.
Periodic Preference Reset
Regularly reset the profile used to generate recommendations for a user (see the sketch after this list).
Mix in Authoritative Content
Add authoritative content from trusted partners on users' subscribed topics (also sketched below).
Hide Interaction Counts
Foster authentic interaction by making numerical properties less central.
Comment Ordering
Set the tone for the conversation by choosing the order in which voices are heard.
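
As a rough sketch of how two of these interventions might compose, the code below implements a scheduled profile reset (Periodic Preference Reset) and a feed that reserves a share of slots for trusted items (Mix in Authoritative Content). The interval, the share, the neutral profile, and all function names are assumptions for illustration, not a prescription.

```python
import random
from typing import Dict, List

RESET_INTERVAL_DAYS = 30      # assumed cadence for Periodic Preference Reset
AUTHORITATIVE_SHARE = 0.2     # assumed share of feed slots for trusted partners
NEUTRAL_PROFILE = [0.5, 0.5]  # assumed starting point after a reset

def maybe_reset(profile: List[float], days_since_reset: int) -> List[float]:
    """Periodic Preference Reset: return a fresh neutral profile on schedule."""
    return list(NEUTRAL_PROFILE) if days_since_reset >= RESET_INTERVAL_DAYS else profile

def blend_feed(personalized: List[Dict], authoritative: List[Dict], size: int) -> List[Dict]:
    """Mix in Authoritative Content: reserve a share of slots for trusted items."""
    n_auth = min(len(authoritative), max(1, round(size * AUTHORITATIVE_SHARE)))
    feed = random.sample(authoritative, n_auth)
    feed += personalized[: size - n_auth]
    random.shuffle(feed)  # interleave so trusted items aren't clustered together
    return feed
```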