Efforts to prevent harm on the internet through regulation and legislation have struggled because they target content rather than features. Legislative action to date has largely been unworkable because it centers on the particulars of speech and speakers.
In contrast, government action that constrains the set of features platforms can offer is likely to be effective and quick to implement, and could dramatically reduce harm. This site houses three flavors of interventions that are good candidates for legislation:
Laws can require platforms to implement specific features that help users reduce the harm those platforms may cause them. This category is promising because it puts more power in users' hands to control how they engage with a platform, a palatable option for both libertarian and interventionist types. The two I am most hopeful for in this category are:
Having access to data is critical for understanding, navigating, and regulating harm.
Laws and regulations can play a role here by requiring that platforms collect the relevant data and offer it in ways that are helpful to these three groups. While most responsible platforms already do versions of this, rogue platforms will build their systems to obscure their usage.
With this context, features like Media Provenance and Save Extensive Metadata are promising, since they would require platforms to collect and offer data that can be used to better understand, navigate, and regulate harm.
At one end of the spectrum of potential interventions, lawmakers could implement industry-wide bans on the narrow, specific practices most likely to cause harm. While this is a long shot, it deserves real consideration: most of these features are historically novel, and taking action against them only gets harder as they gain adoption. Some examples of this strategy:
If you're a lawmaker or staffer and you'd like to talk about any of these ideas (or any of yours), please reach out! I'm happy to help in any way I can.