Intervention: Anonymous Limitations

Definition: Require users to create an account before they can use features that create data or interact with others.
Kind of Intervention: Graduated Features
Reversible: Easily Tested + Abandoned
Suitability: General
Technical Difficulty: Straightforward

This intervention is so straightforward that we're used to seeing it (and expecting it) across most of the internet already: require people to sign in before they can take certain actions.

This approach treats a user's choice to create an account and log in as a mechanism of trust, and rewards that trust with additional functionality. This has several benefits:

  1. Though not impossible, it is much harder to automate actions on, or scrape content from, a site that requires login.
  2. Account creation provides a checkpoint for verifying that the user is human, after which they can be assumed to be. This is why seeing or completing a reCAPTCHA during account creation is so common.
  3. Account creation is an implicit warning that the account can be deleted, and with it access to the features that creating it enabled.
  4. Account creation, and the subsequent correlation of activity with the account, leaves a trail of metadata that can help identify abuse and report it to law enforcement.
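The fourth benefit above can be sketched with a toy audit log (all names here are hypothetical; a real system would persist this to durable storage and include richer metadata such as IP addresses and user agents):

```python
# Minimal sketch: record each account action with a timestamp so
# activity can later be correlated per-account when investigating abuse.
import time

AUDIT_LOG = []  # stand-in for a durable audit store


def record_action(account_id, action, detail):
    """Append one entry to the account's metadata trail."""
    AUDIT_LOG.append({
        "account": account_id,
        "action": action,
        "detail": detail,
        "ts": time.time(),
    })


def actions_by(account_id):
    """Correlate all logged activity for a single account."""
    return [entry for entry in AUDIT_LOG if entry["account"] == account_id]
```

Because every gated action passes through an authenticated account, the trail in `actions_by` is attributable in a way that purely anonymous activity never is.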

Limiting powerful features to logged-in users does not mean forbidding anonymous access entirely. Anonymous access and use can be safe in a variety of contexts. However, any of the following features should generally sit behind some kind of authentication barrier:

  • The ability to persist data to the platform
  • The ability to take actions that affect what other users see
  • The ability to incur nontrivial resource costs
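The gate itself can be sketched as a decorator that sits in front of any feature on the list above (names are hypothetical and framework-agnostic; a real deployment would use its web framework's session handling):

```python
# Minimal sketch: allow anonymous read-only access, but require a valid
# session before any action that persists data or affects other users.
import functools

SESSIONS = {"token-abc": "alice"}  # token -> account; stand-in for a session store
POSTS = ["welcome"]                # persisted, publicly visible data


class AuthRequired(Exception):
    """Raised when an anonymous caller hits a gated feature."""


def require_login(fn):
    """Reject calls that do not present a valid session token."""
    @functools.wraps(fn)
    def wrapper(token, *args, **kwargs):
        user = SESSIONS.get(token)
        if user is None:
            raise AuthRequired("create an account to use this feature")
        return fn(user, *args, **kwargs)
    return wrapper


def list_posts():
    # Anonymous, read-only access: nothing is persisted and nothing
    # other users see can be changed, so no login is needed.
    return list(POSTS)


@require_login
def create_post(user, body):
    # Persists data and changes what other users see, so it is gated.
    POSTS.append(f"{user}: {body}")
    return POSTS[-1]
```

The decorator makes the trust boundary explicit in code: adding `@require_login` to a handler is the whole cost of moving a feature behind the authentication barrier.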

These are guidelines, not rules. Well-defended, well-designed systems can sometimes offer narrow instantiations of these features safely.
