Hate speech is generally understood as language that promotes or incites violence, discrimination, or hostility against groups based on personal attributes such as race, ethnicity, religion, gender, sexual orientation, or other protected or sensitive characteristics. While such speech may be protected by law in most liberal democracies, its unmitigated spread and amplification on digital platforms is both unwanted by the platforms themselves and deeply harmful to the communities it targets.
The relative anonymity and self-sorting afforded by online platforms foster an environment where users feel comfortable expressing, or reveling in, prejudices and animosities that fall outside the bounds of mainstream acceptability. While these sentiments may build up and circulate within small online communities, their impact rarely stays contained: hateful and dehumanizing speech online emboldens both its readers and its writers to act in the physical world. Worse still, the attention- and outrage-maximizing algorithms that social networks use to surface engaging content can just as easily amplify hateful, bigoted, or violent rhetoric. In one particularly appalling example, hate speech against the Rohingya, a Muslim ethnic minority in Myanmar, spread and was amplified on Facebook between 2012 and 2017, contributing to and accelerating the genocide that began in 2017. Hate speech on platforms is not a theoretical concern.