Discipline: Communication studies

Hate speech in social media


Hate speech in social media refers to symbolic attacks on digital platforms such as Instagram, Facebook, X (formerly Twitter), TikTok, and YouTube that are directed specifically against individuals or groups on the basis of their identity.

The term hate speech dates back to the 1980s. It was originally coined by law professors associated with Critical Race Theory in order to name, and make legally punishable, identity-based attacks on university campuses and in public communication[1], [2].

With the rise of social media in the 2000s, the prevalence and perception of hate speech changed dramatically. The platforms lowered the barriers to public expression[3] and thus made it easier for discriminatory and radicalizing content to spread.

The term hate speech has become popular with the general public, but it is often used incorrectly as a synonym for digital hate and offensive content in general.[4] This has caused confusion, as not all content subsumed under the term actually constitutes hate speech. Stretching the term to cover all forms of digital violence also makes it harder to combat and regulate actual hate speech in a targeted way.[5] Moreover, equating online hate speech with digital violence obscures discrimination and power asymmetries as the roots of the problem.[6]

Many pluralistic democratic societies, including Germany with the Network Enforcement Act (NetzDG), have introduced legislation to govern user-generated content on social media and to enforce existing rules in the digital sphere. Such regulation has reignited the old debate about freedom of expression and censorship: while protection against hate speech is strengthened, there is also concern about possible restrictions on freedom of expression.

This brings into focus the question of what exactly can be identified as hate speech. Despite modern content analysis technologies, there is no standardized definition of hate speech that both researchers and the courts can work with. Researchers in the computational social sciences are developing approaches to automated identification, but without a clear consensus.
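As an illustration of this line of research, here is a minimal sketch of a supervised text-classification baseline. It assumes a scikit-learn setup; the toy corpus, labels, and model choice are invented for illustration, and real studies rely on large annotated corpora and, increasingly, transformer-based models.

```python
# Minimal sketch of automated hate speech detection as a supervised
# text-classification task. All texts and labels below are invented
# for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = hate speech (identity-based attack), 0 = not.
texts = [
    "members of that group are vermin and should disappear",  # 1
    "people like them do not deserve to live here",           # 1
    "the referee made a terrible call tonight",               # 0
    "I strongly disagree with the new tax policy",            # 0
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a common, simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["those people are a plague on our country"]))
```

The unresolved definitional question resurfaces here as an annotation problem: two coding guidelines can assign the same post different labels, and any trained model inherits that disagreement.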

In 2020, the United Nations published a plan of action against hate speech that is often used as a point of reference. In it, the UN defines hate speech as “any form of communication, whether written, oral, visual or behavioral, that insults and/or attacks people on the basis of an identity factor such as phenotype, gender, sexual orientation, origin, religion, etc.”[7].

This definition makes it clear that hate speech cannot be reduced to insults and taunts in online discussions. Rather, hate speech is a form of strategic communication that attacks people or groups because of who they are, not because of what they do. This distinguishes hate speech in social media from other forms of digital violence.

Comparability with analogue phenomena

Hate speech in face-to-face situations usually means identity-based insults and abuse (hateful speech) rather than hate-inciting speech, whose legal equivalent in Germany is the justiciable offense of incitement to hatred (Volksverhetzung).

Hate speech in the mass media, by contrast, is less often based on insults and tends to take the form of disputes or controversies, such as debates about whether people are of equal value regardless of their origin, religion, gender, skin color, and so on. Prominent examples are Oriana Fallaci in Italy and the polemics against Muslim migrants initiated by Thilo Sarrazin in Germany.

On the one hand, social media provide a platform for attacks and offensive statements, including xenophobic, racist, sexist, Islamophobic, and anti-Semitic abuse or insults (hateful speech). On the other hand, they also serve as platforms for inciting hatred, discrimination, and contempt against such persons and groups (hate speech).

Unlike in face-to-face situations, hate speech in social media is bound to neither time nor place. Unlike in the mass media, this content continues to circulate online and can be reactivated at any time (ubiquitous availability). Social media also change the language of hate messages, for example through GIFs, memes, and emojis.[8] Digital platforms additionally offer ways to interact with hate messages by liking, sharing, commenting, and hashtagging. All of this complicates the identification and detection of hate messages, which in the digital environment need not consist of easily identifiable symbols such as swastikas or racist slurs (obfuscation); the sketch below illustrates the problem.
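The obfuscation problem can be made concrete with a minimal sketch of a keyword filter and naive normalization. The blocklist and substitution rules below are invented for illustration; production moderation systems are considerably more elaborate.

```python
# Minimal sketch of why keyword filters struggle with obfuscated hate
# messages. The blocklist and normalization rules are invented examples.
import re

BLOCKLIST = {"vermin"}  # hypothetical banned term

# Undo common evasions such as character substitution ("v3rm1n").
LEET = str.maketrans({"3": "e", "1": "i", "0": "o", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    text = text.lower().translate(LEET)
    text = re.sub(r"[\s.\-_*]+", " ", text)   # strip padding characters
    return re.sub(r"(.)\1{2,}", r"\1", text)  # collapse character floods

def keyword_flag(text: str) -> bool:
    return any(term in normalize(text).split() for term in BLOCKLIST)

print(keyword_flag("they are v3rm1n"))       # True after normalization
print(keyword_flag("send them all 'home'"))  # False: no banned token
```

Even with normalization, the second message evades the filter: nothing in its surface form matches a list, although it can function as hate speech in context. This is one reason current detection research moves beyond token matching.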

Such interactions in turn bring users into contact with further content and actors, as the platforms recommend similar content and users. In this way, social media enable their users to network around this content (networking) and considerably simplify mobilization[9], [10].

These changes show a fundamental difference between analog hate speech and online hate speech: while the former is a social problem, the latter is a socio-technical problem. Online hate speech is not only the result of users’ actions and preferences, but is also influenced by what the platforms allow or do not allow, what formats they offer and what content they give more visibility to[11]. This has implications for the regulation of content: Unlike with analog hate speech, the responsibility for hate speech on social media no longer lies solely with the state. Instead, the platforms themselves determine what they define as hate speech and what content they ban[12].

Social relevance

Hate speech has physical and psychological consequences for those affected. Empirical studies show that people who are targeted smoke more often, suffer more frequently from depression, and have higher suicide rates[13].

Hate speech on social media also significantly increases exposure to such content, especially among young people: 79% of internet users in Germany have encountered hate comments, with 14- to 25-year-olds particularly affected.[14] The more frequently users are exposed to hate speech, the less sensitive they become to such content and the more prejudice they develop against those who become its targets.[15]

The consequences for the functioning of pluralistic democratic societies are also serious: hate speech sows mistrust and hostility between social groups and breeds disenchantment with political deliberation. In addition, hate speech can instigate, legitimize, and/or coordinate open physical violence against groups. It is undisputed that hate speech repeatedly appears on social media in connection with ethnic conflicts and even genocides, such as the one in Myanmar in 2018, whether as a cause, an accompanying phenomenon, or an indicator of escalation.[16], [17], [18], [19], [20]

This social development has recently alarmed authorities and politicians and is increasing the pressure on the companies that operate social media platforms to take action. Hate speech is therefore currently one of the central points of contention in the regulation of platforms.

Further links and literature

Recommended reading:

  • Schneiders, P. (2022). Hate Speech auf Online-Plattformen. UFITA Archiv für Medienrecht und Medienwissenschaft, 85(2), 269–333.
  • Sponholz, L. (2021). Hass mit Likes: Hate Speech als Kommunikationsform in den Social Media. In: Hate Speech. Multidisziplinäre Analysen und Handlungsoptionen: Theoretische und empirische Annäherungen an ein interdisziplinäres Phänomen, 15–37. https://doi.org/10.1007/978-3-658-31793-5_2
  • Strippel, C. et al. (2023). Challenges and perspectives of hate speech research. https://doi.org/10.48541/dcr.v12.0

Sources

  1. Sponholz, L. (2020). Der Begriff „Hate Speech“ in der deutschsprachigen Forschung: eine empirische Begriffsanalyse. In: SWS-Rundschau 60(1), 43–65.
  2. Vilar-Lluch, S. (2023). Understanding and appraising ‘hate speech’. In: Journal of Language Aggression and Conflict, 11(2), 279–306.
  3. Engesser, S. et al. (2017). Populism and social media: How politicians spread a fragmented ideology. In: Information, Communication & Society 20(8), 1109–1126.
  4. Schmidt, A./Wiegand, M. (2017). A Survey on Hate Speech Detection using Natural Language Processing. In: Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, 1–10.
  5. Sponholz, L. (2020). Der Begriff „Hate Speech“ in der deutschsprachigen Forschung: eine empirische Begriffsanalyse. In: SWS-Rundschau 60(1), 43–65.
  6. Matamoros-Fernández, A./Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. In: Television & New Media 22(2), 205–224.
  7. United Nations (2020). United Nations Strategy and Plan of Action on Hate Speech: Detailed Guidance on Implementation for United Nations Field Presences.
  8. Al-Rawi, A. (2022). Hashtagged trolling and emojified hate against Muslims on social media. In: Religions 13(6), 52.
  9. Sponholz, L. (2019). Hate Speech in Sozialen Medien: Motor der Eskalation? In: Friese, H./Nolden, M./Schreiter, M. (Hg.). Rassismus im Alltag. Theoretische und empirische Perspektiven nach Chemnitz. Bielefeld, 158–178.
  10. Bennett, W. L./Segerberg, A. (2012). The Logic of Connective Action: Digital Media and the Personalization of Contentious Politics. In: Information, Communication & Society 15(5), 739–768.
  11. Gillespie, T. (2022). Do not recommend? Reduction as a form of content moderation. In: Social Media + Society 8(3).
  12. Land, M. K./Wilson, R. A. (2020). Hate speech on social media: Content moderation in context. In: Connecticut Law Review 52, 1029–1076.
  13. Sponholz, L. (2018). Hate Speech in den Massenmedien: Theoretische Grundlagen und empirische Umsetzung. Springer VS.
  14. forsa (2023). Hate Speech Forsa-Studie 2023.
  15. Soral, W./Bilewicz, M./Winiewski, M. (2018). Exposure to hate speech increases prejudice through desensitization. In: Aggressive behavior 44(2), 136–146.
  16. Müller, K./Schwarz, C. (2021). Fanning the Flames of Hate: Social Media and Hate Crime. In: Journal of the European Economic Association 19(4), 2131–2167.
  17. Fink, C. (2018). Dangerous Speech, Anti-Muslim Violence, and Facebook in Myanmar. In: Journal of International Affairs 71(1.5), 43–52.
  18. Chekol, M. A./Moges, M. A./Nigatu, B. A. (2023). Social media hate speech in the walk of Ethiopian political reform: Analysis of hate speech prevalence, severity, and nature. In: Information, Communication & Society 26(1), 218–237.
  19. Amarasingam, A./Umar, S./Desai, S. (2022). “Fight, Die, and if Required Kill”: Hindu Nationalism, Misinformation, and Islamophobia in India. In: Religions 13(5), 380.
  20. Kimotho, S. G./Nyaga, R. N. (2016). Digitized Ethnic Hate Speech: Understanding Effects of Digital Media Hate Speech on Citizen Journalism in Kenya. In: Advances in Language and Literary Studies 7(3), 189–200.