The challenges of online harms: can AI moderate hate speech?

8 Dec 2021 08:30h - 10:00h

Event report

Hate speech is a growing concern online. It can inflict harm on targeted individuals and political organisations, and stir up social conflict. Yet it is very difficult to combat and to mitigate its harmful effects. There is often no consensus on what hate speech is or at what point it becomes dangerous or illegal, and different cultural contexts add a further layer of complexity to the moderation of online content.

The sheer volume of online content makes fully human moderation impossible, so technology companies often rely on artificial intelligence (AI) tools. However, AI does not resolve most of the issues that arise from the complexity of content moderation. Mr Giovanni De Gregorio (Postdoctoral Researcher, University of Oxford) argued that AI moderation of online hate speech is not only a technical question but also a social one: since AI systems are built and deployed by humans, they carry the same political, economic, and cultural complications as any human problem.

Mr Vincent Hoffman (Researcher at Leibniz Institute of Media Research and Humboldt Institute) addressed the legal perspective on online hate speech. He highlighted that both procedural legal issues and human rights issues must be taken into account when applying AI content moderation. On the procedural side, there is a debate over whether AI moderation should follow the structure of a legal process and allow for appeal. For instance, Germany has passed legislation under which Facebook can moderate content as long as it provides procedural rights: users must be told what has been decided and be able to challenge the decision.
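As an illustration only (not drawn from the session or from any platform's actual system), the sketch below shows what such procedural rights could look like inside an automated moderation pipeline: each decision records its reasons, is disclosed to the user, and can be routed to human review on appeal. All names and fields are hypothetical.

```python
# Hypothetical sketch: a moderation decision record with procedural rights
# (disclosure of the decision and a route to appeal). Not any real platform's API.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ModerationDecision:
    content_id: str
    action: str                # e.g. "remove", "restrict", "no_action"
    reasons: list              # rules or model signals that triggered the action
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appealed: bool = False
    appeal_outcome: Optional[str] = None

    def notify_user(self) -> str:
        """Disclose what was decided and why, as procedural rules would require."""
        return (f"Content {self.content_id}: action '{self.action}' "
                f"for reasons {self.reasons}. You may appeal this decision.")

    def appeal(self, human_review_outcome: str) -> None:
        """Send the decision to human review and record the outcome."""
        self.appealed = True
        self.appeal_outcome = human_review_outcome


# Example: an automated removal that the user successfully challenges.
decision = ModerationDecision("post-123", "remove", ["hate_speech_classifier"])
print(decision.notify_user())
decision.appeal("reinstated")
print(decision.appeal_outcome)
```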

Mr Lucien Castex (Representative for Public Policy of AFNIC and Researcher at Université Sorbonne Nouvelle) raised the issue of the design of the algorithms behind AI content moderation. These algorithms frequently work as a black box: even those who deploy them do not understand what is going on behind the scenes, and so they may be reinforcing bias or disfavouring minorities.
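To make the black-box concern concrete, the toy scorer below (purely illustrative, not from the session, with hypothetical hard-coded weights) shows a well-documented failure mode: when identity terms co-occur with abuse in training data, a naively trained model can learn to treat them as toxic on their own, so benign posts by or about minorities get flagged while plain insults slip through.

```python
# Illustrative toy only: a bag-of-words "toxicity" scorer with hypothetical
# weights standing in for a learned model. The bias stays invisible unless the
# weights themselves are inspected, which is the black-box problem in miniature.

TOXICITY_WEIGHTS = {
    "hate": 0.9,
    "stupid": 0.7,
    # Identity terms often co-occur with abuse in training data, so a naively
    # trained model can end up assigning them high weights by themselves.
    "gay": 0.6,
    "muslim": 0.5,
    "woman": 0.4,
}

THRESHOLD = 0.8  # hypothetical decision threshold


def toxicity_score(text: str) -> float:
    """Sum the weights of known tokens; a crude stand-in for a learned model."""
    return sum(TOXICITY_WEIGHTS.get(token, 0.0) for token in text.lower().split())


def moderate(text: str) -> str:
    score = toxicity_score(text)
    verdict = "FLAGGED" if score >= THRESHOLD else "allowed"
    return f"{verdict} (score={score:.1f}): {text}"


# A benign self-description is flagged, while an insult without identity terms passes.
print(moderate("I am a proud gay muslim woman"))
print(moderate("you are so stupid"))
```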

Ms Neema Iyer (Founder and Director of Pollicy, a civic technology organisation based in Kampala) brought the perspective of the Global South. She noted that social media is used differently in countries of the Global South, and that these regional differences should be taken into account when thinking about solutions to content moderation. Iyer also raised gender issues related to AI content moderation. She has conducted research on social media use in Uganda, in particular on how women in politics use social media. Her research reveals that while women politicians are likely to be targeted over aspects of their personal lives, men are criticised for their policy decisions.

The panel closed by addressing the question of who should be responsible for the development and enforcement of hate speech moderation. The panellists agreed that this is a multistakeholder issue in which private companies bear the greatest responsibility, at least in terms of funding. However, that does not mean they should work alone: private companies must work together with civil society, keeping in mind the cultural context in which they are immersed. Governments should also have a voice, as they hold companies accountable and demand enforcement.

By Paula Szewach

Session in numbers and graphs

[Graphs: most frequent noun chunks; most frequent names and entities; word cloud; prominent verbs with adverbs]

Automated summary

Diplo’s AI Lab experiments with automated summaries generated from the IGF sessions. They will complement our traditional reporting. If you would like to learn more about this experiment, please contact us at ai@diplomacy.edu. The automated summary of this session can be found at this link.