Content Analysis is the pattern of applying policy-defined interventions (like takedowns, de-amplification, or shadow-bans) based on automated analysis of the content itself. It is what most folks picture when they think about content moderation, and it tends to consume a significant share of the time and energy of teams working in Trust and Safety.
Though the implementation of a Content Analysis intervention can vary widely, it typically has the same constituent elements: an automated analysis of a piece of content, a policy that maps that analysis to a decision, and an intervention applied as a result.
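To make the pattern concrete, here is a minimal sketch of that pipeline. The labels, thresholds, and `PolicyRule` shape are illustrative assumptions for this sketch, not any real platform's implementation:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Intervention(Enum):
    """Interventions named above; NONE means take no action."""
    TAKEDOWN = auto()
    DEAMPLIFY = auto()
    SHADOW_BAN = auto()
    NONE = auto()


@dataclass
class PolicyRule:
    """A policy-defined mapping from a classifier label to an intervention."""
    label: str             # e.g. "harassment" -- an illustrative label
    min_confidence: float  # threshold at which the policy triggers
    intervention: Intervention


def classify(text: str) -> dict[str, float]:
    """Stand-in for the automated analysis step (an ML model, hash match,
    or heuristic ruleset in a real system). Returns label -> confidence."""
    return {"harassment": 0.9 if "insult" in text.lower() else 0.1}


def moderate(text: str, policy: list[PolicyRule]) -> Intervention:
    """Apply the first policy rule whose threshold the analysis crosses."""
    scores = classify(text)
    for rule in policy:
        if scores.get(rule.label, 0.0) >= rule.min_confidence:
            return rule.intervention
    return Intervention.NONE


policy = [PolicyRule("harassment", 0.8, Intervention.TAKEDOWN)]
print(moderate("what an insult to good taste", policy))  # Intervention.TAKEDOWN
```

Note that every element of this loop operates on a single piece of content after it already exists; nothing here can touch the incentives that produced it.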
Though Content Analysis is widespread, and likely to be required in some form, it is often tasked with cleaning up messes that it is underpowered to tackle. If a platform incentivizes and actively promotes vitriol, no amount of Content Analysis can undo the negative consequences of that design choice. This site is making an intentional and narrow argument: platforms should rely less on Content Analysis, and instead rethink and redesign themselves to be less capable of, and less attuned to, the perpetuation of harm.