Violent extremism

AI and violent extremism

The relationship between AI and violent extremism is complex and constantly evolving. While AI can contribute to the proliferation of extremist content, it can also help counter it.

AI for recruitment and radicalisation

Extremists use AI to attract and radicalise new members. AI-powered algorithms can identify individuals sympathetic to their ideology by analysing enormous volumes of online data for patterns and signs of possible radicalisation. Through careful targeting, extremists seek out vulnerable individuals and gradually draw them into extremist views.

AI for the creation and dissemination of extremist content

Furthermore, AI can be used to create and disseminate extremist content. Natural language processing algorithms generate content that appears authentic, spreading extremist narratives through social media platforms, websites, and messaging apps. Chatbots and other automated systems enable the rapid spread of this material, reaching a larger audience with less effort.

AI to fight extremist content online

AI also offers potential solutions in the fight against online extremism. Machine learning algorithms can analyse massive amounts of data, enabling law enforcement and internet companies to track and identify extremist activity. By recognising and prioritising harmful content, AI helps find and remove extremist material from online platforms. In counterterrorism operations, AI is used to break up recruitment networks, monitor the dissemination of propaganda, and protect individuals vulnerable to extremism.
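The prioritisation step described above can be illustrated with a toy triage function. This is only a rough sketch, not any platform's actual system: real moderation pipelines rely on trained classifiers and shared hash databases, and the signal terms, weights, and threshold below are entirely illustrative stand-ins for a model's output.

```python
# Toy sketch of content triage for human review. The weighted keyword
# score stands in for a real classifier's risk estimate; all terms,
# weights, and the threshold are illustrative assumptions.

REVIEW_THRESHOLD = 0.5

# Hypothetical signal terms with weights (stand-ins for model features).
SIGNAL_WEIGHTS = {
    "attack": 0.4,
    "recruit": 0.3,
    "manifesto": 0.5,
}

def risk_score(text: str) -> float:
    """Sum the weights of signal terms present in the text, capped at 1.0."""
    lowered = text.lower()
    score = sum(w for term, w in SIGNAL_WEIGHTS.items() if term in lowered)
    return min(score, 1.0)

def triage(posts: list[str]) -> list[str]:
    """Return posts above the review threshold, highest risk first,
    so human moderators see the most urgent items at the top."""
    scored = [(risk_score(p), p) for p in posts]
    return [p for s, p in sorted(scored, reverse=True) if s >= REVIEW_THRESHOLD]
```

The key design point this illustrates is that automated systems typically rank and filter content for human review rather than deciding removal outright.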


The Christchurch shooting in March 2019, when a gunman killed 51 people in two mosques in New Zealand, sparked outrage across the world over what the New York Times described as ‘a mass murder of, and for, the Internet’. The attacker teased the shooting on Twitter, and announced it on the forum 8chan, posting his manifesto on both platforms minutes before the shooting, which he then live-streamed on Facebook. 

In the wake of the attack, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron launched the Christchurch Call, a voluntary global pledge to eliminate terrorist and violent content online.

What is (online) violent extremism? Violent extremism online can be defined as the use of the Internet to promote terrorist causes and to recruit terrorists. Terrorists use online propaganda to radicalise or recruit supporters and new members, and even to inspire ‘lone wolf attacks’ (such as that of the Christchurch gunman, who is believed to have been radicalised online). Online propaganda also contributes to the main goal of terrorist activities: spreading fear in society.

The online distribution of terrorist propaganda and violent extremist content has become a recurring theme in international politics, as well as a cause of concern for Internet companies. Terrorist groups have mastered the use of the Internet for propaganda, attempting to win the ‘information war’, especially through social media campaigns.

The terms ‘violent extremism’ and ‘cyberterrorism’ are often used interchangeably. Cyberterrorism is the use of the Internet by terrorist groups to conduct cyber-attacks (such as DoS and hacking attacks), as well as to prepare and organise terrorist attacks. Policy concerns about cyberterrorism revolve predominantly around possible cyber-attacks on the critical infrastructure of society (critical infrastructure, critical information infrastructure, and critical Internet resources). While terrorist organisations are not believed to currently have the capability to mount a major cyber-attack that could endanger critical infrastructure, it is widely considered only a matter of time before they develop such capabilities.

Violent extremism differs from ‘hacktivism’, the use of hacking tools and techniques to advance a political agenda. While hacktivism is disruptive by nature, its aim is to interrupt normal operations rather than to cause significant economic damage or loss of life.

How is terrorist propaganda distributed? Terrorists have become increasingly skilful in using the Internet to support logistics for their organisations, such as purchasing weapons online. The use of publicly available anonymous proxy servers and anonymising services such as Tor to access the dark web, combined with money transfers through cryptocurrencies such as Bitcoin, leave few traces and make online surveillance and digital forensics highly complex.

In addition, increasingly secure mobile devices with cutting-edge encryption technology, and a variety of mobile applications for encrypted chat such as Telegram or Signal, provide safe ground for internal co-ordination by terrorists while avoiding communication interception by law enforcement.

The threat of online radicalisation has come into focus for many decision makers in various fora. The UN Security Council endorsed the International Framework to Counter Terrorist Narratives (S/2017/375) in Resolution 2354. In addition to the Convention on Cybercrime, the Council of Europe (CoE) adopted the Convention on the Prevention of Terrorism; the CoE also compiled a Database on Cyberterrorism to help mitigate cyberterrorist attacks. The EU emphasises the protection of critical infrastructure through Council Directive 2008/114/EC and the European Programme for Critical Infrastructure Protection.

Yet, due to the unique complexities of jurisdictions in which content may be posted, distributed, or read, as well as the possible breaches of freedom of expression by filtering or censoring online content, governments have not yet found a common global framework. The CoE Convention on Cybercrime is one of the few legal instruments with international reach that can be applied to some extent.

In addition to adhering to voluntary pledges such as the Christchurch Call, major online content and service providers – primarily Facebook, Twitter, and Google – have also been tackling the issue by creating partnerships to counter violent extremism online.

For instance, Tech Against Terrorism – a partnership between technology companies, governments, and the UN Counter-Terrorism Committee Executive Directorate (CTED) – helps the technology industry prevent its services from being misused by terrorists. The partnership develops guidelines and shares lessons learned and technical tools for content moderation.

The Global Internet Forum to Counter Terrorism (GIFCT) is an industry-led initiative launched by Facebook, Microsoft, Twitter, and YouTube to curb the spread of terrorist content online. Following the Christchurch Call to eliminate terrorist and violent extremist content online, the GIFCT was reorganised into an independent organisation. Its aims are to equip digital platforms and civil society groups to develop sustainable programmes that disrupt terrorist and violent extremist activity online; to develop tools and capacity for platforms and other stakeholders to co-operate in mitigating the impact of a terrorist or violent extremist attack; and to empower researchers to study terrorism and counterterrorism.

As for the role of education in preventing violent extremism and the deradicalisation of young people, among the many initiatives are the UN Secretary-General’s Plan of Action to Prevent Violent Extremism, launched in 2015, which recognises the importance of quality education to address the issue, and UNESCO’s guide for policymakers on Preventing Violent Extremism through Education.

Key dimensions of preventing violent extremism

Governments and security services in many countries, including the UK, France, and the USA, have been trying to introduce limits on the strength of encryption algorithms in mainstream products and services, and to insert backdoors that would allow government agencies to access encrypted data when necessary.

Civil society and human rights communities have voiced strong concerns about these developments. The practical operation of counter-extremist campaigns needs to be carefully balanced with the right to freedom of expression.

There is a fine line between protecting security and enabling online censorship, and where the balance lies is very much open to interpretation. This concern was highlighted by David Kaye, UN Special Rapporteur on Freedom of Expression, who argued that ‘violent extremism’ could serve as the ‘perfect excuse’ for governments to limit freedom of expression. The right formula for content policy – one that ensures the maximum possible level of freedom of expression while keeping radicalisation to a minimum – can only be found through continued dialogue between the security and human rights communities.

However, the lack of a universally accepted definition of violent extremism can lead to misinterpretation, and may hamper global co-operation in mitigating threats and terrorist incidents.