Tackling hate speech online: Ensuring human rights for all

27 Nov 2019 11:30h - 13:00h

Event report


Combating hate speech online remains a significant challenge for national and international policymakers, law enforcement agencies, human rights defenders, and IT companies. The state’s responsibility to promote and protect human rights online has been widely recognised; how to fulfil it, however, remains an open question. This panel discussion brought together panellists from the public, private, and civil society sectors to foster cross-sectoral collaboration. The discussion focused on three topics: the distinction between freedom of expression and hate speech, the hard and soft regulatory instruments in place, and the use of artificial intelligence (AI) to tackle hate speech online.

Protecting freedom of speech while combating hate speech online is no easy task, as there is no universally agreed definition of hate speech. Norms in North American and European settings do not necessarily translate to regions outside the Global North. Ms Saloua Ghazouani Oueslati (Regional Director, Article 19 Tunisia and the MENA region) highlighted how differently hate speech is understood in the Middle East and North Africa (MENA) region, where it is broadly understood as speech that attacks the national identity of a state or national authority. Moreover, she pointed out the risk that policymakers in the region could apply hate speech laws and regulations produced in the Global North to further undermine the freedom of expression of populations whose rights are already restricted. A shared understanding of fundamental rights should therefore be established before laws tackling hate speech online are enforced.

As platform hosts, IT companies moderate online content. However, information on who decides which content qualifies as hate speech is not easily accessible. The lack of clarity and transparency around the standards that companies set internally gives the public insufficient assurance that the companies are conducting due diligence for their platform users. Addressing the transparency of content-removal decision-making, Ms Alex Walden (Civil and Human Rights Specialist, Google) explained that content reviewers make removal decisions based on the community guidelines established for each platform. At the international level, reaching agreement on what constitutes hate speech is highly important, as it can improve the quality of training for content reviewers.

Regulatory mechanisms to tackle hate speech online have largely taken the form of voluntary action by social media companies. In May 2016, however, the EU Code of Conduct on countering illegal hate speech online was adopted as a collaboration between the European Commission, Facebook, Microsoft, Twitter, and YouTube. Ms Louisa Klingvall (European Commission) explained that, in the monitoring and assessment process, the Commission works with approximately thirty NGOs to ensure that the Code of Conduct is respected. The collaborative approach between public, private, and civil society actors has proven effective thus far: the companies’ removal rate for reported hate speech online has risen from 28% to 72%.

The panel highlighted the importance of the existing legal framework and its application to wrongdoing in the online space. Mr Matthias Kettemann (Leibniz-Institute for Media Research | Hans-Bredow-Institut) underlined that it is essential for intermediaries (i.e. companies that host social media platforms) to uphold existing hard law in order to protect individual human rights and ensure social cohesion online. The enforcement of law is a complementary remedy to content removal.

AI has become one of the central tools for removing hateful content from social media platforms. Best practice for monitoring and reviewing content online is a combination of machine and human review. As hate speech is often context-dependent, human reviewers who understand not only the local language but also the cultural and historical context are necessary. Relying fully on automated decision-making can lead to AI over-removing content or making biased decisions, which can restrict freedom of speech and expression. A societal consensus that stresses the ethical and human-centric use of AI is required to effectively tackle hate speech online.
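To illustrate the hybrid approach the panel described, the following is a minimal sketch of a moderation pipeline in which an automated classifier removes only content it is highly confident about and routes borderline cases to human reviewers. The thresholds, the toy keyword scorer, and all names are hypothetical assumptions for illustration; real platforms use far more elaborate, per-language systems.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real platforms tune these empirically.
AUTO_REMOVE = 0.95    # near-certain violations are removed automatically
HUMAN_REVIEW = 0.60   # ambiguous cases are escalated to human reviewers

@dataclass
class Post:
    text: str
    language: str  # used to route posts to reviewers with local context

def toy_score(post: Post) -> float:
    """Toy stand-in for a trained classifier returning P(hate speech).

    A real system would use models trained per language and region,
    since hate speech is highly context-dependent.
    """
    flagged = {"slur1", "slur2"}  # placeholder vocabulary
    words = post.text.lower().split()
    hits = sum(w in flagged for w in words)
    return min(1.0, hits / 3)

def moderate(post: Post) -> str:
    score = toy_score(post)
    if score >= AUTO_REMOVE:
        return "remove"        # automated removal, logged for appeal
    if score >= HUMAN_REVIEW:
        return "human_review"  # a reviewer with local linguistic and
                               # cultural knowledge makes the final call
    return "keep"              # below threshold: leave the content up

print(moderate(Post("an ordinary comment", "en")))  # -> "keep"
```

The design point the sketch captures is that automation handles only the clear-cut volume, while anything uncertain is deferred to humans, reducing the over-removal risk the panel warned about.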

The session briefly mentioned the human rights of content reviewers; the overall focus, however, was on the human rights of victims of hate speech online and of general users. The responsibility shouldered by content reviewers is enormous and can cause them mental and physical harm. The discourse on hate speech online requires a holistic approach that would enable the international community to ensure the attainment of human rights for all.

By Nagisa Miyachi