Like any tool of communication and collaboration, digital platforms tend to facilitate harassment in all its forms: the act or pattern of threatening, intimidating, or humiliating someone else, sometimes called cyberbullying when the parties are children. Platforms are uniquely susceptible to harassment for a few reasons:
While the motivations for harassment are varied, and the degree of intentionality differs from case to case, harassment in digital spaces can be helpfully divided into two varieties:
While individual harassment may not be possible to prevent entirely, there is a wide slate of well-established design techniques that give users the capacity to block other users, limiting those users' ability to see and interact with them. These tools can be effective when they are readily accessible, and they can be proactively suggested when harassing content or behavior is detected. However, since these tools typically operate on the 1:1 connection between two users, they can be circumvented by motivated actors who are able to create multiple accounts. Though these techniques can't prevent harm outright, they can help users feel more resilient to it, and by letting them take action against specific users, help them regain a sense of safety and control over their experience on a platform.
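The 1:1 nature of these blocking tools, and their circumvention weakness, can be sketched with a minimal per-user block list. All names and structures here are hypothetical illustrations, not any particular platform's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class BlockList:
    """A per-user block list: hides content from blocked accounts,
    but only for this one user's view of the platform."""
    blocked: set[str] = field(default_factory=set)

    def block(self, user_id: str) -> None:
        self.blocked.add(user_id)

    def filter_feed(self, posts: list[dict]) -> list[dict]:
        # Hide posts authored by blocked accounts. Because the check
        # keys on account identity, a motivated harasser can circumvent
        # it simply by creating a new account.
        return [p for p in posts if p["author"] not in self.blocked]

# Usage: blocking "troll" hides their posts from this user's feed only.
feed = [{"author": "alice", "text": "hi"}, {"author": "troll", "text": "…"}]
my_blocks = BlockList()
my_blocks.block("troll")
visible = my_blocks.filter_feed(feed)
```

The design choice worth noting is that the filter acts on the viewer's side of the relationship: it restores the blocking user's sense of control, but does nothing to the harasser's account itself.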
While individual harassment can be anticipated on any platform, structurally supported harassment is typically a signal that something is wrong with the platform's design. Platforms that see emerging signs of structurally supported harassment should ask themselves some hard questions about their business model and the design of their systems: