Definition: Commercial offerings that intentionally deceive their participants.
Legal Status: Almost always illegal
Platform ToS: Violates Policy
Victim Visibility: Contextually Sensitive
TSPA Abuse Type: Deceptive & Fraudulent Behavior: Fraud

Internet platforms tend to enable scams to spread and thrive: the internet provides anonymity and global reach, and platforms offer automation that lets scam operations scale. Together, these forces allow scams with even vanishingly small rates of successful victimization to become profitable enterprises.

Scams on the internet run the gamut from highly personalized frauds tailored to the online profile of an individual target, all the way to spam emails proffering goods to any address that will accept delivery. Scams also take many forms on platforms: they are often spread directly through messages, try to game recommendation algorithms, and abuse search tools to find potential victims. Despite this range in tactics, approach, and mechanisms, scams are consistently financially motivated, so a simple market-derived principle holds true: if a platform can slightly increase the cost of running a scam, or slightly decrease the odds that a potential victim will bite, the scammer will seek out better returns elsewhere.
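That market logic can be made concrete with a back-of-the-envelope expected-value calculation. The numbers below are purely illustrative assumptions, not measurements: a bulk campaign is profitable when expected payouts exceed distribution costs, and small shifts in either term flip the sign.

```python
def campaign_profit(messages: int, hit_rate: float,
                    payout_per_victim: float, cost_per_message: float) -> float:
    """Expected profit of a bulk scam campaign (all inputs are hypothetical)."""
    return messages * hit_rate * payout_per_victim - messages * cost_per_message

# A vanishingly small hit rate can still be profitable at scale
# (roughly +40,000 with these illustrative numbers)...
baseline = campaign_profit(1_000_000, 0.0001, 500.0, 0.01)

# ...but a modest platform-imposed cost increase pushes the campaign
# into the red...
costlier = campaign_profit(1_000_000, 0.0001, 500.0, 0.06)

# ...as does a drop in the hit rate, e.g. from warnings shown to recipients.
warier = campaign_profit(1_000_000, 0.00001, 500.0, 0.01)
```

The point is not the specific figures but the shape of the curve: at scale, per-message cost and per-message success rate dominate, so even marginal friction changes the scammer's calculus.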

This, and a few other commonalities within scams, point to mechanisms by which they can be stymied in online environments:

  1. Scams are allergic to feedback. Reviews, comments, and crowd-sourced ratings are all mechanisms by which scams can be more easily found out by the user base at large, without platform intervention.
  2. Scams are typically transacted between strangers. Platform features that filter or warn based on affinity can dramatically decrease the success rates of scams, and thus the degree to which they will make their home on a platform.
  3. Scams typically use lots of new accounts. Efforts to limit account volume, or to treat account age and history as a signal of safety, tend to have an outsized impact on how easy it is for a scammer to do their work.
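The second and third points can be combined into simple server-side heuristics. The sketch below is illustrative only; the field names and thresholds are assumptions, and a real system would tune them against observed abuse. It scores an account on age, bulk outreach to strangers, and prior reports, then decides whether to add friction (rate limits, warnings, or review).

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int                # how long the account has existed
    messages_to_strangers: int   # recent messages to users with no prior contact
    reports_received: int        # abuse reports filed against the account

def friction_score(a: Account) -> int:
    """Higher score = more reason to rate-limit, warn, or require review."""
    score = 0
    if a.age_days < 7:                   # new accounts are a common scam signal
        score += 2
    if a.messages_to_strangers > 50:     # bulk outreach to non-contacts
        score += 2
    score += min(a.reports_received, 3)  # cap so one report brigade can't dominate
    return score

def should_rate_limit(a: Account) -> bool:
    return friction_score(a) >= 3
```

Note the deliberate asymmetry: an established account with a clean history pays no cost, while a week-old account messaging dozens of strangers hits friction immediately, which is exactly where the economics of bulk scams are most fragile.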

It's also worth noting: as large language models make it cheap to generate large volumes of high-quality, human-sounding text, content analysis techniques will become less and less valuable for detecting and acting on scams. It's essential that platforms start thinking soon about how to safeguard against scams through the lens of features.

What features facilitate Scams?

Internet platforms increase the potential pool of marks for scammers, while limiting the likelihood of their accountability. Features that facilitate scams include:

  1. The ability for users to exchange text in real time.
  2. The capacity for users to initiate outgoing payments or receive incoming ones.

How can platform design prevent Scams?

  1. User-submitted reviews: Platforms that enable repeat engagement can benefit from allowing users to learn about the past experiences of other users.
  2. Limited number of subscriptions: Because many scams rely on bulk distribution to be successful, limiting the number of subscribers (or the number of subscriptions) lets a platform design toward real-world connections, and away from exponential-scale distribution.
  3. Two-factor payment identification: Require one user to specify multiple pieces of information about another user to send them a payment.
  4. Reporting mechanisms: Allow users to flag content or behavior that they find to be abusive.
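As a concrete illustration of two-factor payment identification, the sketch below (a hypothetical API, not any real payment system) refuses to resolve a payment recipient unless the sender supplies at least two independent identifiers, and all supplied identifiers match the same account. This defeats the common scam pattern of a lookalike account impersonating a known contact, since the impostor rarely controls a matching second identifier.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    username: str
    phone: str
    email: str

def resolve_recipient(directory: list[User], *,
                      username: Optional[str] = None,
                      phone: Optional[str] = None,
                      email: Optional[str] = None) -> Optional[User]:
    """Return the recipient only if >= 2 identifiers were given and all match."""
    given = {field: value for field, value in
             {"username": username, "phone": phone, "email": email}.items()
             if value is not None}
    if len(given) < 2:
        return None  # not enough identifying information to authorize a payment
    for user in directory:
        if all(getattr(user, field) == value for field, value in given.items()):
            return user
    return None  # identifiers given, but they don't all point at one account

# Example directory (entirely made-up data):
directory = [User("alice", "+1-555-0100", "alice@example.com"),
             User("bob", "+1-555-0101", "bob@example.com")]
```

With this directory, supplying only a username fails, and supplying a username paired with someone else's phone number also fails; only a consistent pair of identifiers resolves to an account.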