[Read more session reports and live updates from the 12th Internet Governance Forum]
Ms Anriette Esterhuysen, Director of Global Policy and Strategy, Association for Progressive Communications, introduced the panel and explained the issues addressed by the workshop. Online hate speech has become a policy concern for decision makers, who often introduce countermeasures that reduce openness and freedom of speech.
Ms Gayathry Venkiteswaran, PhD candidate, University of Nottingham Malaysia Campus, introduced the report 'Let the mob do the job', which focuses on Bangladesh, India, Malaysia, and Pakistan. The report examines targeted attacks committed in the name of defending religion or beliefs against people who exercise their freedom of expression. Its findings show that national legislation enables an environment in which blasphemy and insults to religion are widely used as justification for violent attacks.
Ms Chinmayi Arun, Associate Professor, Centre for Communication Governance, National Law University, New Delhi, India, suggested additional recommendations for the report. These include more forums discussing national practice, and closer international monitoring resulting in an index. Arun provided an Indian perspective on the national context of hate speech, saying that national law is too broad and court cases are shaped by judges' interpretations, while criminal code procedures lack transparency. Other seemingly unrelated laws, such as tax laws, also support censorship.
Mr Carlos Affonso de Souza, Director, ITS Rio, talked about a recent Brazilian queer art exhibition that was cancelled after successful online campaigns by conservative groups. The episode prompted discussion of laws defining standards for artwork eligible for display in public spaces. As a side effect, hate speech and fake news concerns might lead to stricter content control policies.
Ms Grace Githaiga, Associate, KICTANet, focused on hate speech in Kenya, which rose significantly as a political issue during the 2017 elections. According to Githaiga, the Kenyan commission tasked with combating hate speech was biased in favour of the ruling political party.
Mr Wolfgang Schultz, Director, Hans-Bredow-Institut, talked about the roles and responsibilities of Internet intermediaries, and provided examples from Germany. He mentioned both shortcomings and positive aspects of the new German legislation on hate speech and fake news, effective January 2018.
Ms Susan Benesch, Director, Dangerous Speech Project, followed up on previous speakers by saying that the law can be useful, but sometimes introduces inadequate mechanisms to fight hate speech and fake news. She pointed out that the term 'hate speech' is rarely used in national legislation and is very often vaguely defined. Because it is hard to define what inappropriate content is, it is important to focus on defining what type of harm society is trying to prevent. Finally, she suggested conducting a review of all intermediaries' hate speech policies and moderation practices.
Mr David Kaye, UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, stressed that hatred is obviously present online; the question is how to regulate it. Kaye focused on the definition of hate speech. He pointed out that law enforcement authorities treat cyberspace as a jurisdiction-free zone and are selective in enforcement, when they should treat hateful content as a real threat. He also stressed the need for transparency and consistency in intermediaries' application of content moderation rules.
The audience discussion tackled questions on the role of big data and algorithms in the moderation process; whether different types of hate speech should be treated in different ways; and the difficulties intermediaries experience in assessing and recognising hate speech.
by Radek Bejdak