Definition: Actively prevent automated use of a feature.

One of the most potent vectors for harm on the internet is automation. Platforms that lack robust defenses can quickly become overrun by inauthentic content: a single malicious actor with basic coding knowledge can create accounts and content at a rate that platforms are powerless to stop. Whether it is spam, malware, or self-promotion, almost any form of deliberate harm on the internet is exacerbated by automation.

Beyond ensuring that visitors to a site are human, or validating human status at account creation, platforms also control the degree of automation they invite through the technical interfaces they expose. Platforms that offer free APIs open themselves up to scripting in a way that platforms without machine-readable interfaces do not. At a lower level, platforms with guessable content IDs allow scrapers to be more efficient and harder to detect than platforms with encrypted or well-randomized URL patterns.
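To illustrate the point about URL patterns, here is a minimal sketch contrasting the two ID schemes. The function names and the 128-bit token length are illustrative choices, not a prescribed design:

```python
import secrets

# Guessable, sequential IDs: a scraper that sees /post/1041 can simply
# enumerate /post/1042, /post/1043, ... and harvest everything efficiently.
_next_id = 1041

def sequential_id() -> int:
    global _next_id
    _next_id += 1
    return _next_id

# Non-guessable IDs: 16 bytes (128 bits) from the OS CSPRNG make
# enumeration infeasible; a scraper must discover each URL another way,
# which is slower and leaves a more detectable access pattern.
def random_id() -> str:
    return secrets.token_urlsafe(16)
```

A sequential scheme leaks the total content volume and makes bulk scraping a simple loop; the randomized scheme forces scrapers back onto the platform's own listing and search surfaces, where rate limits and anomaly detection can see them.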

Today, building in processes to prevent automated access is fairly straightforward: tools like reCAPTCHA offer industry-standard mechanisms for validating that the user on the other end of a browser connection is human. While these heuristic and task-based approaches to identifying bots work effectively today, there is growing concern that rapid progress in AI will undermine each approach in turn, since it is becoming harder and harder to find isolated tasks that humans can do easily but machines cannot.

In light of this, many security-sensitive applications, such as banking and cloud service hosting, are turning to more robust mechanisms for confirming human identity that tie a user to a unique real-world identity: SMS verification, physical letters mailed with confirmation codes, or, in the most extreme cases, government ID verification. These approaches are more resistant to scaled abuse, but bring serious baggage in the form of high costs and privacy intrusions.

Interventions using this Approach

Identity Verification
Require users to register for an application with a state-issued identity document.
Recommend Only Verified Users
Require identity verification before adding content from an account to a recommendation engine.
Limit account volume
Reducing the volume of accounts a person can create restricts their capacity to cause harm at scale.
Require Labels on AI-Created Content
Enact legislation mandating prominent disclosure of AI-generated content.
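The "limit account volume" intervention above can be sketched as a simple sliding-window quota keyed on some per-person signal. Everything here is an assumption for illustration: the window length, the threshold, and the use of an IP address as the key are placeholder choices, and a real deployment would use stronger signals (phone numbers, device fingerprints) and persistent storage:

```python
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 24 * 60 * 60   # one-day window (illustrative)
MAX_ACCOUNTS_PER_WINDOW = 3     # illustrative threshold

# source identifier (e.g. an IP address) -> timestamps of recent signups
_creations: dict = defaultdict(deque)

def allow_account_creation(source: str, now: Optional[float] = None) -> bool:
    """Return True if `source` is still under its signup quota, recording
    the signup; return False (and record nothing) if the quota is hit."""
    now = time.time() if now is None else now
    window = _creations[source]
    # Evict timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ACCOUNTS_PER_WINDOW:
        return False
    window.append(now)
    return True
```

Because the quota only caps volume rather than proving humanity, it complements rather than replaces the verification approaches discussed above: a persistent attacker can rotate sources, but each rotation raises their cost and their detectability.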