The internet has opened up a vast landscape of potential online harms. Often, these harms are a direct translation of real-world abuse into digital spaces, revealing how deeply rooted societal ills permeate even novel environments. In other cases, the internet poses unique, digital-native harms, underscoring the profound ways the internet's connectivity and ubiquity can reshape our world, and sometimes do so for the worse.
The public discussion around online harms has focused on content moderation. This approach is time-consuming, contentious, and politically fraught, and it requires constant oversight, rule-building, and attention. While it might seem like the right tool to reach for when harmful content spreads online, it's helpful to step back and ask: what about the platform makes harmful content prevalent?
While digital platforms themselves are rarely directly to blame for these abuses, their design, algorithms, and policies often facilitate or exacerbate harm. By digging into why platforms become conduits for harm, this project explores how they could be tweaked by their owners, regulated by governments, or petitioned by their users for redesigns that center harm aversion.
To get started, pick the harm below that concerns you most.