Prevention is often more effective than cure, a principle that, when applied to the trust and safety ecosystem, gives us the concept of "feature omission." This approach involves intentionally not building (or intentionally removing) features that are likely to amplify or facilitate harmful behavior. Platforms that take this approach can significantly reduce the "surface area" for potential abuse, thereby restricting the ways in which their services can be used to generate harm. This is particularly valuable when platforms are honest about the limits of their own understanding of all the ways their features can be used, and misused.
While feature omission might sound like a drastic change, it can be applied narrowly and in combination with other approaches. Take the challenge of malware. One approach to feature omission would be to prevent users from sharing files with one another, which would certainly prevent malware from becoming a problem! Another approach would be to only allow attachments between users who have messaged back and forth at least three times (an Affinity filter). That would curtail only a small percentage of legitimate usage while still dramatically reducing the spread of malware on the platform. Viewed through the lens of feature omission, each feature can be divided into a larger number of smaller sub-features. As this process is repeated, the abuse vectors tend to cluster in only one or two of the narrowly construed sub-features, which can then be disabled or discouraged.
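To make the affinity-filter idea concrete, here is a minimal sketch of how such a rule might be expressed in code. All names here (MessageLog, record_message, can_send_attachment, MIN_EXCHANGES) are hypothetical and chosen for illustration; real platforms would enforce a rule like this inside their messaging infrastructure rather than in a standalone class.

```python
from collections import defaultdict

# Illustrative threshold: each party must have sent the other at least this many messages.
MIN_EXCHANGES = 3


class MessageLog:
    """Tracks how many messages each user has sent to each other user."""

    def __init__(self):
        # (sender, recipient) -> number of messages sent in that direction
        self._counts = defaultdict(int)

    def record_message(self, sender: str, recipient: str) -> None:
        self._counts[(sender, recipient)] += 1

    def can_send_attachment(self, sender: str, recipient: str) -> bool:
        # Attachments are allowed only after an established back-and-forth:
        # both directions of the conversation must meet the threshold.
        return (
            self._counts[(sender, recipient)] >= MIN_EXCHANGES
            and self._counts[(recipient, sender)] >= MIN_EXCHANGES
        )


if __name__ == "__main__":
    log = MessageLog()
    for _ in range(3):
        log.record_message("alice", "bob")
        log.record_message("bob", "alice")

    print(log.can_send_attachment("alice", "bob"))  # True: established exchange
    print(log.can_send_attachment("alice", "eve"))  # False: no prior conversation
```

The design choice worth noting is that the rule is symmetric: a stranger cannot unlock attachments simply by spamming messages at a target, because the target's replies are also required before the sub-feature becomes available.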
In the race for market share, many platforms feel constant pressure to achieve feature parity with their competitors. However, this race can lead to over-extension, where platforms adopt more features than they can effectively monitor, or whose usage they can fully understand. Every new feature can come with unforeseen vulnerabilities or avenues for misuse, particularly if it is rushed to market without adequate understanding or safeguards. The result is often a reactive scramble to address integrity or system-health harms after they've already become a problem, rather than a proactive strategy to prevent those harms from occurring.
A good lens for examining most content moderation efforts is the role that the platform plays in the perpetuation of harm. When the platform offers more robust and powerful functionality, a bad actor can inflict more harm for the same effort than they could with a more limited set of functionality. Through this lens, a platform's contribution to harm comes down to a single question: how does the platform amplify or enable harm?
With that in mind, the most effective way for a platform to prevent harm is to reduce the potential role it can play in propagating it.
For instance, platforms that proactively recommend content to users inherently bear a greater responsibility for ensuring that content is safe and non-abusive, since the role they play in the causal chain that leads to harm is of primary consequence. In contrast, subscription aggregation services, which rely on user selection rather than recommendations, have a narrower scope of responsibility.
By deliberately choosing not to implement certain features, platforms can lighten their moderation burden, focusing their efforts on areas they understand well while also diminishing the potential for harm. This strategy of thoughtful feature omission underscores that, sometimes, offering less can indeed provide users with more safety and security.