[Harm Category]
Coordinated inauthentic activity (CIA) refers to the manipulation or deception that coordinated accounts can achieve by synchronizing their efforts. This coordination can be carried out through bot accounts or through collaboration between real users.
What unites CIA harms is not their outcomes but the mechanism by which they take effect: because platforms typically aggregate data and input from their users, motivated users can strategically exploit the mechanisms the platform provides for purposes the platform does not consider legitimate. The users involved may share financial, ideological, or personal objectives, but what they have in common is the degree to which they are jointly motivated and the unison with which they act.
Some examples of this include bot networks that artificially amplify content, groups of real users jointly voting or reporting to push or suppress content, and coordinated click fraud against ad marketplaces.
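To make the aggregation point concrete, the toy sketch below (with made-up item names) shows how a ranking that simply sums user inputs cannot distinguish broad, independent interest from a single synchronized group; it is an illustration of the mechanism, not any platform's actual ranking logic.

```python
from collections import Counter

# Votes from independent users with varied interests.
organic_votes = ["item_a"] * 7 + ["item_b"] * 5

# Votes from one motivated group acting in unison.
coordinated_votes = ["item_c"] * 10

# The platform aggregates all votes identically, so the coordinated item
# tops the ranking even though it reflects a single actor's objective.
ranking = Counter(organic_votes + coordinated_votes)
print(ranking.most_common())
# [('item_c', 10), ('item_a', 7), ('item_b', 5)]
```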
CIA is particularly challenging to combat because it often looks like the virality that platforms seek to engender and the natural fluctuations in activity that they expect. In fact, for platforms focused only on top-line metrics like growth or sales, coordinated inauthentic activity can simply look like things working well: it tends to align with the metrics (like engagement, user volume, and revenue) that platforms typically optimize for. As one example, ad marketplaces have struggled to deal with ad fraud in part because the incentives of the ad marketplace are closely aligned with the incentives of those conducting the fraud.
Today, typical mechanisms for combatting CIA on platforms have relied on behavioral analysis: looking for clusters of users who behave similarly to one another but depart significantly from the behavior of the rest of the user population. Over time we can expect these mechanisms to become less successful, since the asymmetry between offense and defense in this area, paired with strong incentives for perpetrators and weak incentives for platforms, is likely to yield persistently high equilibrium levels of CIA.
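As a rough illustration of this behavioral-analysis approach, the sketch below assumes each account has already been summarized as a numeric feature vector (posting-time patterns, shared-link counts, and similar signals, all hypothetical) and uses density-based clustering to surface tight groups of accounts whose shared behavior is also atypical of the overall population. The features, thresholds, and parameters are placeholders, not any platform's actual detection logic.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

def flag_coordinated_clusters(features: np.ndarray, min_cluster_size: int = 10):
    """Flag dense clusters of accounts that behave very similarly to each
    other while departing markedly from the overall user population."""
    X = StandardScaler().fit_transform(features)

    # DBSCAN groups accounts that sit unusually close together in behavior
    # space; isolated ("typical") accounts fall into the noise label -1.
    labels = DBSCAN(eps=0.5, min_samples=min_cluster_size).fit_predict(X)

    population_center = X.mean(axis=0)
    suspicious = {}
    for label in set(labels) - {-1}:
        members = np.where(labels == label)[0]
        cluster_center = X[members].mean(axis=0)
        # A tight cluster whose center is also far from typical behavior is
        # a candidate coordinated group; 2.0 is an illustrative threshold.
        departure = np.linalg.norm(cluster_center - population_center)
        if departure > 2.0:
            suspicious[label] = members
    return suspicious
```

In practice the choice of features matters more than the clustering algorithm, and adversaries adapt their behavior to whatever features are used, which is part of the offense-defense asymmetry described above.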