Intervention: Right-size content visibility

Definition: Place limits on the amount of harm content can cause on a platform by restricting its reach.
Kind of Intervention: Graduated Features
Reversible: Easily Tested + Abandoned
Suitability: General
Technical Difficulty: Challenging
Legislative Target: Yes

For content-based harms, access volume matters

For most content-based harms, the magnitude of harm stems not from the content itself but scales with the number of people who access it: the harm is caused by people consuming the content, not by the content simply existing. As an example, the financial harm that accrues to copyright holders is approximately linear in the number of people who gain illegal access to the content, since each is a potential customer the IP holder has missed out on. This type of thinking applies to a wide range of (but not all) content-based harms, as diverse as copyright infringement, malware, violent extremism, deepfakes, and health misinformation.
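As a toy illustration of that roughly linear relationship (the per-viewer figure and the function below are invented for the example, not a real estimate):

```python
def expected_harm(num_viewers: int, harm_per_viewer: float) -> float:
    """Toy model: total harm grows roughly linearly with reach."""
    return num_viewers * harm_per_viewer

# e.g. 10,000 illegitimate views of content that sells for $10 corresponds
# to roughly $100,000 in forgone sales for the rights holder.
print(expected_harm(10_000, 10.0))  # 100000.0
```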

Content sharing features exist along a spectrum of visibility

Platforms give users the ability to share content with one another, but they do so under different paradigms of content sharing, which fall along a spectrum of visibility:

  1. Storage only - content is only accessible to the user that created/uploaded it.
  2. 1:1 sharing - content can be shared with other users individually.
  3. 1:Many sharing - content can be shared with groups of users, or shared with many users easily.
  4. Public - content can be published such that anyone has the capacity to see it.
  5. Public + Searchable - content is public, and can be easily found by a user that doesn't yet know that it exists.
  6. Recommended - content is not only public; the platform actively recommends it to users, so they see it even without intending to seek it out.
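One way to make this ordering concrete is to treat the spectrum as an ordered scale. A minimal sketch in Python (with names invented for illustration) models each level as an ordered enum value, where a higher value means greater potential reach:

```python
from enum import IntEnum

class Visibility(IntEnum):
    """Content-sharing visibility levels, ordered from least to most reach."""
    STORAGE_ONLY = 1       # accessible only to the uploader
    ONE_TO_ONE = 2         # shareable with individual users
    ONE_TO_MANY = 3        # shareable with groups, or with many users easily
    PUBLIC = 4             # anyone has the capacity to see it
    PUBLIC_SEARCHABLE = 5  # public and discoverable via search
    RECOMMENDED = 6        # public and actively pushed to users by the platform

# Because the levels are ordered, "does feature X expose content more widely
# than feature Y?" becomes a simple comparison:
assert Visibility.RECOMMENDED > Visibility.PUBLIC
```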

Which of these content-sharing modalities a platform supports is a design choice, so tweaking which features are enabled is a way to dramatically reduce the platform's potential for harm.

Align visibility features to trust

This leads us to a general principle of the design of content-sharing systems: Place limits on the visibility of a user's content aligned with their level of trust.

This provides a flexible way of thinking about product restrictions that limit users' capacity to use platforms to generate harm, with a simple mad-lib formulation: "only allow [visibility feature] when a user has [signal of trust]".

Some examples of how this principle could manifest as product rules or features (a rough sketch of such a policy check follows the list):

  • Require users to be logged in to upload content. (This is particularly important for CSAM, as an example)
  • Don't allow new users to share content with other users unless they have a prior connection.
  • Restrict the size of groups that users can share content into.
  • Don't allow a user to post more than 1000 public items.
  • Only enable a user's content to be searchable if their account has existed for at least a year.
  • Only recommend content that comes from accounts that have been ID verified.
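As a rough sketch of how a few of these rules could fit together (an illustration only, not any platform's real policy engine; the trust signals, thresholds, and the Visibility enum from the earlier sketch are all assumptions), a single function can map an account's trust signals to the maximum visibility it is allowed to use:

```python
from dataclasses import dataclass

@dataclass
class TrustSignals:
    """Hypothetical per-account trust signals a platform might track."""
    is_logged_in: bool
    account_age_days: int
    is_id_verified: bool
    public_post_count: int

def max_allowed_visibility(signals: TrustSignals) -> Visibility:
    """Map trust signals to the highest visibility this account may use.

    Uses the Visibility enum from the earlier sketch; the thresholds mirror
    the illustrative rules above rather than recommending specific values.
    """
    if not signals.is_logged_in:
        # Anonymous users get no sharing features (uploads themselves can
        # also be blocked entirely, outside this check).
        return Visibility.STORAGE_ONLY
    if signals.public_post_count >= 1000:
        return Visibility.ONE_TO_MANY        # public-item cap reached
    if signals.account_age_days < 365:
        return Visibility.PUBLIC             # public, but not yet searchable
    if not signals.is_id_verified:
        return Visibility.PUBLIC_SEARCHABLE  # searchable, but never recommended
    return Visibility.RECOMMENDED

def may_share(signals: TrustSignals, requested: Visibility) -> bool:
    """Allow a share request only if it stays within the account's trust level."""
    return requested <= max_allowed_visibility(signals)
```

Each share or publish action then reduces to one check: compute the account's visibility ceiling from its trust signals and compare it against the visibility the requested feature implies.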

And... maybe don't?

While we can often use trust signals responsibly to restrict visibility features, sometimes there is a legitimate question as to whether a platform should support a given visibility level at all:

  • Should an end-to-end encrypted discussion platform EVER recommend channels to its users? Probably not. 
  • Should a porn site ever allow an anonymous user to upload videos? Probably not.
  • Should content from anonymous users ever be recommended? Probably not.

As always, the most surefire way to mitigate harm is to think carefully about what the bounds of expected and acceptable behavior should be for an application, and to expressly constrain users' on-platform behavior to what is expected.
