Messaging on online platforms refers to the exchange of digital messages between users. This feature can range from private one-on-one conversations to large-scale broadcasting, each requiring different rules and strategies for combating misuse.
Users expect privacy in messaging, as online one-on-one conversations can adopt the closeness and confidence of their in-person counterparts. Because of this, many platforms boast end-to-end encryption, which limits a platform's ability to oversee the content of messages and makes content moderation in messaging particularly challenging. At the same time, users do expect the platform to protect them from universally unwanted forms of abuse such as spam, phishing, and malware. Users' aversion to content moderation tends not to extend to automated, additive tools like spam filters or warning labels, which help users sort through or contextualize messages without feeling that their privacy has been violated. All of this challenges platforms: users' expectations of protection and privacy vary across contexts, even though the underlying actions look uniform from the platform's perspective.
Balance between these competing objectives can often be achieved by using metadata features of conversations to estimate user expectations. In small groups of closely connected individuals, users expect more privacy; in larger conversations, or conversations with new connections, they expect more protection. One could use this observation about Affinity to scope certain sets of scanning protections or rules. Similar differentiation can be applied via Gatekeeping or Graduated Features.
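The metadata-based differentiation described above can be sketched as a simple tiering rule. The field names, thresholds, and tier labels below are hypothetical illustrations of the idea, not any platform's actual policy; real systems would use richer affinity signals than group size and connection age.

```python
from dataclasses import dataclass


@dataclass
class ConversationMetadata:
    """Hypothetical metadata a platform might hold without reading content."""
    participant_count: int          # number of users in the conversation
    days_since_first_contact: int   # age of the newest connection in the thread


def protection_tier(meta: ConversationMetadata) -> str:
    """Map conversation metadata to an illustrative moderation tier.

    Large conversations and brand-new connections lean toward protection;
    small groups of long-standing contacts lean toward privacy.
    """
    # Large or new conversations: apply the full suite of
    # spam/phishing/malware protections.
    if meta.participant_count > 10 or meta.days_since_first_contact < 7:
        return "enhanced"
    # Small, well-established conversations: minimal scanning,
    # preserving the expectation of privacy.
    if meta.participant_count <= 2 and meta.days_since_first_contact >= 30:
        return "minimal"
    # Everything in between: automated, additive tools such as
    # spam filters and warning labels.
    return "standard"


# A first message from a new contact gets the most protection;
# a years-old one-on-one thread gets the least.
print(protection_tier(ConversationMetadata(2, 0)))     # prints "enhanced"
print(protection_tier(ConversationMetadata(2, 365)))   # prints "minimal"
```

The point of the sketch is that the decision consumes only metadata, so it remains compatible with end-to-end encryption of the message contents themselves.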
Messaging might seem like a domain where a platform faces irreconcilable tradeoffs: whether to implement end-to-end encryption, for example. However, interventions that tease apart the many contexts in which users message one another can strike that balance.