To minimize the impact of harms like Online Shaming, Addiction, Trolling, and Incentivizing Dangerous Behavior, change the design of Recommendation and Search.
Intervention:

Limit Content Reach

Definition: Put upper bounds on how many people can view/share/interact with content.
Reversible: Challenging to Roll Out
Suitability: General
Technical Difficulty: Straightforward

We often take for granted that the internet has viral dynamics built into it, but that's just one design choice. Rather than actively amplifying content, platforms could cap engagement: optimize for it up to the cap, but not beyond. In a proactive content recommendation feed, this might mean that after recommending a piece of content to 10,000 people, the platform wouldn't recommend it to anyone else.
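As a rough illustration of how small this change could be in a recommendation pipeline, here is a minimal sketch in Python. Everything in it is hypothetical (the ReachStore class, the REACH_CAP value, and the rank_candidates function are made up for this example, not drawn from any real platform's systems); the point is only that a reach cap can be enforced as a simple filter before ranking.

```python
# Hypothetical sketch: enforcing a reach cap before ranking recommendations.
# All names and values here are illustrative, not from any real platform.

REACH_CAP = 10_000  # stop recommending an item once this many people have seen it


class ReachStore:
    """Tracks how many unique users each item has been shown to."""

    def __init__(self):
        self._impressions: dict[str, set[str]] = {}

    def record_impression(self, item_id: str, user_id: str) -> None:
        self._impressions.setdefault(item_id, set()).add(user_id)

    def reach(self, item_id: str) -> int:
        return len(self._impressions.get(item_id, set()))


def rank_candidates(candidates: list[str], scores: dict[str, float],
                    store: ReachStore, cap: int = REACH_CAP) -> list[str]:
    """Rank candidates by engagement score, dropping anything that hit the cap.

    The cap is applied before ranking, so the system still optimizes for
    engagement, just not beyond the bound the platform has chosen.
    """
    eligible = [c for c in candidates if store.reach(c) < cap]
    return sorted(eligible, key=lambda c: scores.get(c, 0.0), reverse=True)
```

The design choice worth noticing is that the cap lives outside the ranking model itself: the recommender keeps doing what it does, and the platform simply declines to serve an item past a chosen reach threshold.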

Why might a platform take this on? Virality is a mechanism the platform uniquely enables, so its role in the causal chain that leads to harm is fundamentally different for harms that happen through virality than for harms that happen through other avenues (consider digital means replacing physical means in blackmail). In virality harms, the platform's contribution to the harm is larger, and it is uniquely positioned to mitigate it. If virality isn't a core goal of the platform, platforms can (and should) consider building in mechanisms to prevent virality from occurring at all, since the dynamics virality introduces are expressly beyond their ambitions and are likely things they're not equipped to handle well (or thinking about how to handle).

Within this context, placing limits on the reach content can achieve through recommendation systems makes sense, but it requires platforms to have a clear-eyed view of their capacity to cause harm, their responsibility to mitigate it, and the ethical imperative that connects the two. That probably won't happen without strong public pressure, legislation, or both.
