Definition: Less is more - a better platform can be achieved by removing pieces of it.

Prevention is often more effective than cure; applied to the trust and safety ecosystem, this principle yields the concept of "feature omission." The approach involves intentionally not building (or intentionally removing) features that are likely to amplify or facilitate harmful behavior. Platforms that take this approach can significantly reduce the 'surface area' for potential abuse, thereby restricting the ways in which their services can be used to generate harm. This is particularly valuable when a platform is honest about the limits of its own understanding of all the ways its features are used, and misused.

While feature omission might sound like a drastic change, it can be applied narrowly and in combination with other approaches. Take the challenge of malware. One approach would be to prevent users from sharing files with one another at all, which would certainly stop malware from becoming a problem. Another would be to allow attachments only between users who have messaged back and forth at least three times (an "affinity filter"). That curtails only a small percentage of legitimate usage while still dramatically reducing the spread of malware on the platform. Viewed through the lens of feature omission, each feature can be divided into a larger number of smaller sub-features. As this process is repeated, the abuse tends to cluster in only one or two of the narrowly construed sub-features, which can then be disabled or discouraged.
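To make the affinity filter concrete, here is a minimal sketch in Python. The in-memory message store, the names, and the three-message threshold are illustrative assumptions, not any particular platform's API.

from collections import defaultdict

MIN_EXCHANGES = 3  # messages each party must send before attachments unlock

class MessageStore:
    """Tracks how many messages each user has sent to each other user."""
    def __init__(self):
        self._counts = defaultdict(int)

    def record_message(self, sender: str, recipient: str) -> None:
        self._counts[(sender, recipient)] += 1

    def count(self, sender: str, recipient: str) -> int:
        return self._counts[(sender, recipient)]

def may_send_attachment(store: MessageStore, sender: str, recipient: str) -> bool:
    """Allow attachments only after both users have messaged each other
    at least MIN_EXCHANGES times, cutting off drive-by malware delivery
    while leaving ordinary text messaging untouched."""
    return (store.count(sender, recipient) >= MIN_EXCHANGES
            and store.count(recipient, sender) >= MIN_EXCHANGES)

# Example: a stranger cannot attach a file; an established contact can.
store = MessageStore()
assert not may_send_attachment(store, "alice", "bob")
for _ in range(3):
    store.record_message("alice", "bob")
    store.record_message("bob", "alice")
assert may_send_attachment(store, "alice", "bob")

Note that only the narrow sub-feature (attachments from strangers) is disabled; the broader messaging feature is untouched.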

Why is this a good strategy?

In the race for market share, many platforms feel constant pressure to achieve feature parity with their competitors. But this race can lead to over-extension, where platforms adopt more features than they can effectively monitor, or even fully understand how users employ them. Every new feature can bring unforeseen vulnerabilities or avenues for misuse, particularly if it is rushed to market without adequate understanding or safeguards. The result is often a reactive scramble to address integrity or system-health harms after they have already become a problem, rather than a proactive strategy to prevent those harms from occurring.

Why Omit?

A good lens for most content moderation efforts is the role the platform itself plays in perpetuating harm. When a platform offers more robust and powerful functionality, a bad actor can exact more harm for the same effort than they could with a more limited set of functionality. A platform's contribution to harm can be framed as a single question: how does the platform amplify or enable it?

With that in mind, the most effective way for a platform to prevent harm is to reduce the potential role it can play in propagating it.

For instance, platforms that proactively recommend content to users inherently bear a greater responsibility for ensuring that content is safe and non-abusive, since the role they play in the causal chain that leads to harm is of primary consequence. In contrast, subscription aggregation services, which rely on user selection rather than recommendations, have a narrower scope of responsibility.

By deliberately choosing not to implement certain features, platforms can lighten their moderation burden, focusing their efforts on areas they understand well while also diminishing the potential for harm. This strategy of thoughtful feature omission underscores that, sometimes, offering less can indeed provide users with more safety and security.

Interventions using this Approach

Omit Comment Reaction Volume
Don't prominently display the number of likes or other forms of feedback a comment gets.
API Surface Area
Prevent automated access to features that are intended only for human activity.
Hide Interaction Counts
Foster authentic interaction by making numerical properties less central.
All Subscriptions Reciprocal
Require "following" to be bidirectional to avoid an exponential distribution of reach and attention.
No E2E Encryption
When content is end-to-end encrypted, platforms can offer no content-based protections.
Don't Collect Location Data
Limit storage, retention, and sharing of location information including IP address.
No Search by Location
When abusers can discover content and accounts by location, the creators of content are in danger.
Use an OAuth Provider
Centralize identity management + risk with a company that thinks about it full time.
Ban Proactive Content Recommendation
Prohibit infinite feeds for children, and provide a universal opt-out for adults.
Deprioritize User Identity
In platforms where the identity of the participants isn't central, omit it.
Limit Content Reach
Put upper bounds on how many people can view/share/interact with content (a sketch of such a cap follows this list).
Subtly Modulate Uploads
Features that return exact, bit-for-bit replicas of uploaded data are ripe for abuse.
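As a concrete illustration of the "Limit Content Reach" intervention above, the sketch below shows one way a reach cap could be enforced. The in-memory viewer counter, the 10,000-viewer ceiling, and the function names are assumptions for illustration, not a real platform's implementation.

from collections import defaultdict

REACH_CAP = 10_000  # maximum number of distinct viewers for any single piece of content

_viewers = defaultdict(set)  # content_id -> set of user_ids who have already seen it

def record_view(content_id: str, viewer_id: str) -> bool:
    """Return True if the view is served, False if the cap has been hit.
    Once REACH_CAP distinct viewers is reached, the content stops spreading,
    bounding the damage any single harmful post can do."""
    seen = _viewers[content_id]
    if viewer_id in seen:
        return True  # repeat views by the same user don't count against the cap
    if len(seen) >= REACH_CAP:
        return False
    seen.add(viewer_id)
    return True

# Example: the first 10,000 distinct viewers are served; the 10,001st is not.
assert record_view("post-1", "user-1")

The same bounded-counter pattern could apply to shares or other interactions; the point is that reach has a ceiling rather than growing without limit.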