Harm:

Coordinated Inauthentic Activity

Definition: Manipulating online platforms through coordinated creation of inaccurate data.
Legal Status:
Can be illegal
Platform ToS:
Allowed by Policy
Victim Visibility:
Unaware
Classification:
Contextually Sensitive
TSPA Abuse Type:
Scaled Abuse: Inauthentic Behavior


Coordinated inauthentic activity (CIA) refers to the manipulation or deception that coordinated accounts can achieve by synchronizing their efforts. This coordination can be carried out by bot accounts or through collaboration between real users.

What unites CIA harms is not their outcomes but the mechanism by which they derive their effect: because platforms typically aggregate data and input from their users, motivated users can strategically exploit the mechanisms the platform provides for purposes the platform does not consider legitimate. Participants may be united by common financial, ideological, or personal objectives, but what defines the behavior is the degree to which they are jointly motivated and the unison with which they act.
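As a minimal illustration of that mechanism, here is a sketch in plain Python (the ratings and cluster size are hypothetical) of how a small coordinated cluster can move the aggregated score a platform reports, even though no individual action looks invalid on its own:

    # Sketch: how a small coordinated cluster can move an aggregated score.
    # All ratings and counts below are hypothetical.

    def mean_rating(ratings):
        """Aggregate user feedback into a single score, as many platforms do."""
        return sum(ratings) / len(ratings)

    # 40 organic ratings hovering around four stars.
    organic = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4] * 4

    # 15 coordinated accounts all leave the minimum rating at once.
    coordinated = [1] * 15

    print(f"organic score:   {mean_rating(organic):.2f}")                # 4.10
    print(f"after brigading: {mean_rating(organic + coordinated):.2f}")  # 3.25

A group far smaller than the organic audience drags the aggregate down by nearly a full star, which is what makes raw aggregation such an attractive target for coordination.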

Some examples of this include:

  1. Coordinated reviews, likes, or downvotes, attempting to sway public perception of content (also known as feedback bombing or brigading)
  2. Fake political or ideological groups, created to engender outrage and reaction, a form of ideological misinformation.
  3. Coordination between participants in an online marketplace to temporarily boost prices, as is sometimes seen among ride-share drivers around airports.
  4. A cluster of websites that all link to one another in an attempt to sway search ranking algorithms (a toy illustration follows this list).
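To make the last example concrete, here is a toy sketch of a PageRank-style calculation (plain Python power iteration; the graph, damping factor, and site names are illustrative assumptions, not any real search engine's algorithm) showing how a ring of mutually linking sites can inflate the score of a target page:

    # Sketch: a link ring inflating a target page's PageRank-style score.
    # The graph below is entirely hypothetical.

    def pagerank(links, damping=0.85, iterations=50):
        """Plain power-iteration PageRank over a dict of page -> outbound links."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outbound in links.items():
                if not outbound:  # dangling page: spread its rank evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / len(pages)
                else:
                    for target in outbound:
                        new_rank[target] += damping * rank[page] / len(outbound)
            rank = new_rank
        return rank

    # Two otherwise similar pages, but "boosted" is also pointed at by a
    # ring of farm sites that link to it and to one another.
    farm = [f"farm{i}" for i in range(5)]
    links = {"organic": [], "boosted": [], "hub": ["organic", "boosted"]}
    for i, site in enumerate(farm):
        links[site] = ["boosted", farm[(i + 1) % len(farm)]]

    ranks = pagerank(links)
    print(f"organic: {ranks['organic']:.3f}")
    print(f"boosted: {ranks['boosted']:.3f}")  # noticeably higher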

CIA is particularly challenging to combat because it often looks like the mechanisms of virality that platforms seek to engender and the natural fluctuation they expect to occur. In fact, to platforms focused only on top-line metrics like growth or sales, coordinated inauthentic activity can simply look like things working well: it tends to align with the metrics (like engagement, user volume, and revenue) that platforms typically optimize for. As one example, ad marketplaces have struggled to deal with ad fraud in part because the incentives of the ad marketplace platform are closely aligned with the incentives of those conducting the fraud.

Today, typical mechanisms for combating CIA on platforms rely on behavioral analysis: looking for clusters of users who behave similarly to one another but depart significantly from the rest of the user population. Over time, these mechanisms can be expected to become less successful, since the asymmetry between offense and defense in this area, paired with strong incentives for perpetrators and weak incentives for platforms, will tend to yield equilibrium levels of CIA that are particularly high.
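A highly simplified sketch of that kind of behavioral analysis follows. It represents each account as a vector of action counts and flags pairs of accounts whose behavior is unusually similar; the feature set, counts, and threshold are assumptions for illustration, and a real system would also compare any flagged cluster against the broader population rather than relying on pairwise similarity alone:

    import math
    from itertools import combinations

    # Sketch: flag accounts whose behavior is suspiciously similar.
    # Hypothetical daily action counts: [posts, likes_given, follows, reposts]
    accounts = {
        "user_a": [3, 20, 1, 2],
        "user_b": [2, 25, 0, 1],
        "user_c": [4, 18, 2, 3],
        # Three accounts acting in near lockstep:
        "sock_1": [40, 0, 200, 40],
        "sock_2": [41, 0, 198, 40],
        "sock_3": [40, 1, 201, 39],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    SIMILARITY_THRESHOLD = 0.999  # assumption; tuned per platform in practice

    suspicious_pairs = [
        (a, b)
        for a, b in combinations(accounts, 2)
        if cosine(accounts[a], accounts[b]) > SIMILARITY_THRESHOLD
    ]
    print(suspicious_pairs)  # only the sock_* accounts pair with one another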

What features facilitate Coordinated Inauthentic Activity?

Subscriptions
Allows a user to opt in to receiving new content from another channel.
Feedback Aggregation
Collecting user feedback and condensing it into quantitative scores.
Search
Locating and ranking content to be responsive to a user's query.

How can platform design prevent Coordinated Inauthentic Activity?

Identity Verification
Require users to register for an application with a state-issued identity document.
No Comments for Fresh Accounts
Don't accept comments from newly created or barely used accounts.
Limit Account Volume
Reducing the volume of accounts a person can create restricts their capacity to cause harm at scale.
Anonymous Limitations
Anonymous access makes automation easier to achieve. Require users to create an account before they can use features that create data or interact with others.
Temporal Comment Limits
Restricting how many comments a user can post in a given period makes people think twice about their actions (a sketch combining this with the fresh-account gate follows this list).
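Several of these mitigations reduce to simple server-side checks. The sketch below (plain Python; the thresholds are illustrative assumptions, not recommendations) combines an account-age gate for commenting with a per-hour comment cap:

    import time

    # Sketch: "no comments for fresh accounts" plus "temporal comment limits".
    # Thresholds below are illustrative assumptions only.
    MIN_ACCOUNT_AGE_SECONDS = 3 * 24 * 3600  # account must be at least 3 days old
    MAX_COMMENTS_PER_HOUR = 10

    comment_log = {}  # user_id -> list of recent comment timestamps

    def may_comment(user_id, account_created_at, now=None):
        now = time.time() if now is None else now

        # Gate 1: newly created accounts cannot comment yet.
        if now - account_created_at < MIN_ACCOUNT_AGE_SECONDS:
            return False

        # Gate 2: cap how many comments one account can post per hour.
        recent = [t for t in comment_log.get(user_id, []) if now - t < 3600]
        if len(recent) >= MAX_COMMENTS_PER_HOUR:
            return False

        recent.append(now)
        comment_log[user_id] = recent
        return True

Comparable checks could also key on device, IP range, or payment instrument to make account-volume limits harder to evade, though each added gate also adds friction for legitimate users.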