AI firms rally behind Ardern’s Christchurch Call to fight extremist content online

Former PM Jacinda Ardern has been actively involved in the Christchurch Call, where she has emphasized AI's potential to reduce terrorist activity and improve content moderation online.


New Zealand’s former Prime Minister, Jacinda Ardern, has publicly endorsed the use of artificial intelligence (AI) in the fight against online extremist content.


AI can play a significant role in content moderation, which involves approving or rejecting user-generated content against community guidelines and terms of service. AI can search for, analyse, flag, and remove content that violates these rules, including offensive or obscene material and content that may promote violence, the former PM said in an interview with Axios.
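To make the flag-and-remove workflow described above concrete, here is a minimal, purely illustrative sketch in Python. The keyword list, thresholds, labels, and function names are assumptions for illustration only, not any platform's actual moderation system.

```python
# Illustrative sketch only: a toy moderation pipeline that scores user posts
# against hypothetical policy rules and routes them to approve / flag / remove.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    action: str   # "approve", "flag_for_review", or "remove"
    score: float  # 0.0 (benign) .. 1.0 (clear violation)

# Hypothetical policy terms; a real system would use trained classifiers,
# not keyword lists.
VIOLENCE_TERMS = {"attack", "kill", "bomb"}

def score_post(text: str) -> float:
    """Return a crude violation score based on how many policy terms appear."""
    words = set(text.lower().split())
    hits = len(words & VIOLENCE_TERMS)
    return min(1.0, hits / 2)

def moderate(text: str,
             remove_threshold: float = 0.8,
             flag_threshold: float = 0.4) -> ModerationResult:
    """Route a post: auto-remove clear violations, flag borderline cases for review."""
    score = score_post(text)
    if score >= remove_threshold:
        return ModerationResult("remove", score)
    if score >= flag_threshold:
        return ModerationResult("flag_for_review", score)
    return ModerationResult("approve", score)

if __name__ == "__main__":
    print(moderate("Lovely weather in Christchurch today"))
    print(moderate("they plan to attack and bomb the venue"))
```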

This public endorsement is part of the Christchurch Call, an initiative to address the spread of violent extremism online. Following the 15 March 2019 terrorist attacks on the Muslim community of Christchurch, New Zealand, Ardern launched the Christchurch Call together with French President Emmanuel Macron. She now serves as Special Envoy for the Christchurch Call, which aims to eradicate terrorist and violent extremist content online. Over 50 countries and several tech companies, including Meta, Amazon, Google, and Microsoft, have joined the initiative.

The Christchurch Call also addresses the impact of algorithms on the spread of terrorist and violent extremist content. Its members have committed to understanding the role of online activity in radicalization and studying how algorithms affect the dissemination of such content, while protecting user privacy and proprietary information in the course of that research.

Why does it matter?


At a leaders' summit in Paris last week, AI firms including OpenAI, Discord, Anthropic, and Vimeo joined the Christchurch Call to Action to suppress terrorist content in the wake of the Hamas-Israel conflict.
New Zealand is taking the lead in co-designing a regulatory framework for AI through inclusive national participation. The aim is to develop a domestic understanding of AI in order to produce well-informed policies, mitigate the risks associated with AI systems, and maximize their benefits for the common good.
With a growing AI sector, the country is developing a national AI strategy to promote trust and transparency. While AI can be a powerful tool for addressing extremist content, it is not a flawless solution: AI systems must be accountable, transparent, and free from bias, and there is still a need for hybrid automation with a 'human in the loop' in content moderation.
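As a rough illustration of what 'human in the loop' moderation can look like, here is a small sketch in which automation handles clear-cut cases and borderline posts are queued for a human decision. The thresholds, names, and queue structure are assumptions, not a description of any real platform.

```python
# Illustrative sketch of a 'human in the loop' review queue. It assumes some
# upstream model has already produced a violation score in [0, 1].
from collections import deque

# Posts awaiting a human reviewer's decision.
review_queue = deque()

def triage(post: str, model_score: float) -> str:
    """Route a post given an upstream model's violation score (thresholds are assumptions)."""
    if model_score >= 0.9:
        return "removed_automatically"
    if model_score >= 0.4:
        review_queue.append((post, model_score))  # defer to a human reviewer
        return "queued_for_human_review"
    return "approved_automatically"

def human_review(approve: bool) -> str:
    """A reviewer resolves the oldest queued post; their judgement overrides the model."""
    post, score = review_queue.popleft()
    verdict = "approved" if approve else "removed"
    return f"{verdict} by reviewer (model score {score:.2f}): {post!r}"

if __name__ == "__main__":
    print(triage("benign holiday photo caption", 0.05))
    print(triage("ambiguous post the model is unsure about", 0.55))
    print(human_review(approve=False))
```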