Addressing terrorist and violent extremist content online

27 Nov 2019 11:15h - 13:15h

Event report


Social media platforms allow vast amounts of content to be shared rapidly with a global audience. On the positive side, this has favoured the spread of democratic values and increased people’s ability to participate in and engage with society on a wide range of issues and in many forms. In her video remarks, Ms Jacinda Ardern (Prime Minister of New Zealand) stressed the power for good that information and communications technology (ICT) offers to us all. Yet the same capabilities can be misused to amplify the spread of hateful, violent, and terrorist content. She recalled the Christchurch terrorist attacks of March 2019, in which social media was used to livestream an attack on civilians. Beyond the timely blocking of the stream and the removal of the video, a longer-term strategy was needed. Together with the French government, New Zealand launched the Christchurch Call to eliminate terrorist and violent extremist content online. The call outlines the responsibilities of governments and online platforms, and it is open to contributions from civil society and academia.

The session was structured around two main topics: the responsibilities and responses of the relevant stakeholders on the one hand, and the risks and rights involved in countering violent extremism online on the other.

Responsibilities and responses of governments and the private sector

To ensure that the Internet remains free, open, and secure, it is necessary to include all relevant stakeholders in the discussion and to clearly identify their roles and responsibilities in tackling violent extremism. This is far from an easy task, starting with the problem of agreeing on a definition of responsibility: each actor has a different understanding of what responsibility entails. As Mr Paul Ash (Acting Director, National Security Policy Directorate, Department of the Prime Minister and Cabinet, New Zealand) pointed out, this is one of the reasons behind the Christchurch Call: finding a common understanding of the responsibilities of governments (e.g. in terms of public safety), of the private sector as a service provider, and of academia and civil society as actors that can participate in and inform the discussion. Ms Courtney Gregoire (Chief Digital Safety Officer, Microsoft Corporation) joined Ash in praising the Christchurch Call for attempting to clarify the roles of each stakeholder in what is a problem for the whole of society.

The lack of clarity in defining responsibilities is also a regulatory problem. Gregoire observed that, with the ‘patchwork quilt of laws’ present at the global level, it is difficult for companies to assess the true risks and thus formulate appropriate global responses. The German government has been communicating with online platforms to acquire more information on the magnitude of the issue, namely, how much flagged content is taken down and how the platforms’ complaint management systems work. Mr Gerd Billen (State Secretary, Federal Ministry of Justice and Consumer Protection, Government of the Federal Republic of Germany) explained that Germany’s approach is based on the Network Enforcement Act, which specifically demands more transparency from platforms on their content filtering mechanisms. It also requires platforms to delete content flagged as a hate crime within 24 hours of the flagging.

The United States has taken a two-pronged approach, consisting of the short-term removal of content and the building of long-term resilience to terrorist messaging. Ms Sharri Clark (Senior Advisor for Cyber and Countering Violent Extremism, US Department of State) explained that the US approach is guided by three main principles: content is removed only if it violates national law (e.g. in the case of child pornography); strong collaboration between tech companies and the government; and the promotion of counter-narratives to terrorism based on tolerance and critical thinking. Finally, the USA also engages in international collaboration in fora such as the IGF.

From the private sector’s point of view, different measures have been taken. Facebook currently follows a five-factor policy, ranging from the enforcement of its community standards and engagement with law enforcement authorities to collaboration with other actors in the industry. Mr Brian Fishman (Policy Director, Counterterrorism, Facebook) also mentioned Facebook’s role in the Global Internet Forum to Counter Terrorism (GIFCT) as a positive example of collaboration among online platforms in tackling terrorist content through the exchange of best practices and the offering of training programmes to smaller platforms.

Risks and threats to human rights

Moderating terrorist and violent content is probably one of the most difficult areas of content regulation. This is partly due to the difficulty of ensuring that current policies and technical tools are compatible with the existing human rights framework. Mr Edison Lanza (Special Rapporteur on Freedom of Expression, Inter-American Commission on Human Rights) stressed that censorship is not an effective response to violent extremism, and that any content filtering policy has to meet the requirements of legality, proportionality, and necessity.

On the human rights side, restrictions on freedom of expression are probably the most evident byproduct of content policy regulations. Clark explained that in the USA any regulation has to be consistent with the constitution and its amendments, the first of which protects freedom of expression. Under this framework, for example, the use of extremist language is not in itself illegal. She further explained that censorship could even prove counterproductive, as it may excessively restrict innovation and commerce.

Gregoire added that excessively restrictive policies can also impact other human rights, such as the right to privacy and the right to Internet access.

On the risks and challenges side, Mr Yudhanjaya Wijeratne (Team Lead – Algorithms for Policy, LIRNEasia) invited the audience to consider the technical aspects before addressing policy concerns. He explained that there is a design problem in the algorithms that filter hateful content: they are not trained with sufficient data in many languages of the Global South, and are therefore poorly equipped to distinguish harmful words or to flag them in relation to the cultural context. He called for more engagement and collaboration between the platforms that hold the datasets, local experts, and the academic sector. Finally, he noted that it is impossible to build technical systems without biases, and that it is therefore fundamental that datasets be open to multidisciplinary interrogation in the search for solutions.
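To make the data-coverage problem Wijeratne described concrete, the following is a minimal, hypothetical sketch in Python with scikit-learn. The training examples, labels, and test sentence are invented, and it does not represent any platform’s actual moderation pipeline; it only shows that a classifier trained solely on English text has no signal to act on when given text in a language absent from its training data.

```python
# Hypothetical sketch, not any platform's real system: a toy "harmful
# content" classifier trained only on English examples, illustrating
# why a model lacks signal for languages absent from its training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented, English-only training set (0 = benign, 1 = harmful).
train_texts = [
    "have a great day everyone",
    "thank you all for the support",
    "they deserve to be attacked",   # harmful
    "we should hurt them all",       # harmful
]
train_labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A harmless Sinhala greeting ("good morning"). None of its tokens are
# in the fitted vocabulary, so it becomes an all-zero TF-IDF vector and
# the prediction reflects only the model's class prior, not the text.
unseen = ["සුබ උදෑසනක්"]
print(model.predict(unseen), model.predict_proba(unseen))
```

The failure is symmetric: genuinely harmful text in an underrepresented language would pass through equally unexamined, which is why the call for locally sourced training data and open, auditable datasets is a technical prerequisite rather than only a policy preference.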

The discussion continued with some speakers expressing concern that content regulations are becoming excessively restrictive, as the pressure on platforms is so high that they end up removing too much content. Billen counter-argued that companies are not taking down all the content that is flagged, but only a quarter of it on average. Gregoire also noted that an excess of regulation at the national level may cause fragmentation on the policy side and make it difficult for companies to respond to requests coming from different countries and legal frameworks.

By Marco Lotti