Executive Summary For Lawmakers

Efforts to prevent harm on the internet through regulation and legislation have struggled because they've targeted content rather than features.

Legislative action up to this point has largely been unworkable because it centers on the particulars of speech and speakers. Approaches like regulating content, requiring platform-led disclosure, demanding age restrictions, or changing liability regimes have each proven ineffective at preventing harm (click each for details).

In contrast, government action that constrains the set of features platforms can offer is likely to be effective, quick to implement, and capable of dramatically reducing harm. This site houses three flavors of interventions that are good candidates for legislation:

Require Options for User Control

Laws can require platforms to implement specific features that help users reduce the harm platforms may cause them. This category is promising because it puts more power in users' hands to control how they engage with a platform, a palatable option for both libertarian and interventionist types. The two I am most hopeful for in this category are:

  • Self-imposed time limits - Platforms should be compelled to offer users tools to control their time on the platform in ways that cannot be overridden. This should be implemented for everyone, but particularly for children, as part of robust packages of parental controls.
  • Periodic Preference Reset - Platforms should allow users to reset the data used to recommend content to them, without removing other content, like their messages or content they have created.

Require Collection of and Access to Data

Having access to data is critical for:

  • Law Enforcement to tackle illegal activity on platforms
  • Researchers seeking to understand platform dynamics
  • Users seeking to understand their usage and media environment

Laws and regulations can play a role here by requiring that platforms collect the relevant data, and that they offer it in ways that are helpful to these three groups. While most responsible platforms already do versions of this, rogue platforms will build their systems to obscure how they are used.

With this context, features like Media Provenance and Save Extensive Metadata are promising, since they would require platforms to collect and offer data that can be used to better understand, navigate, and regulate harm.

Intervene to Prevent the Most Harmful Practices

At one end of the spectrum of potential interventions, lawmakers could implement industry-wide bans on the narrow, specific practices most likely to cause harm. While this is a long shot, it's one that deserves real consideration: most of these features are historically novel, and taking action against them only gets harder as they see more use. Some examples of this strategy:

Need a Hand?

If you're a lawmaker or staffer and you'd like to talk about any of these ideas (or any of yours), please reach out! I'm happy to help in any way I can.