Open Forum #68 Countering the use of ICT for terrorist purposes
Session at a Glance
Summary
This discussion focused on countering the use of information and communication technologies (ICT) for terrorist purposes. Representatives from various organizations, including the UN Counter-Terrorism Committee Executive Directorate (CTED), the Parliamentary Assembly of the Mediterranean (PAM), the UN Office on Drugs and Crime (UNODC), Tech Against Terrorism, and the Global Internet Forum to Counter Terrorism (GIFCT), shared their perspectives and initiatives.
The speakers highlighted the evolving nature of terrorist threats in the digital space, including the exploitation of social media, video games, and emerging technologies like artificial intelligence. They emphasized the need for a multi-stakeholder approach involving governments, tech companies, civil society, and academia to address these challenges effectively.
Key initiatives discussed included CTED’s work on developing guiding principles for member states, PAM’s efforts to promote dialogue and legislation on AI regulation, UNODC’s Global Initiative on Handling Electronic Evidence, Tech Against Terrorism’s focus on disrupting terrorist use of the internet, and GIFCT’s cross-platform solutions for tech companies.
The speakers stressed the importance of balancing counter-terrorism efforts with respect for human rights and fundamental freedoms. They also highlighted the need for improved international cooperation, capacity building for law enforcement and judicial systems, and the development of legal frameworks to address crimes committed through or by AI.
The discussion underscored the critical role of public-private partnerships in countering terrorist use of the internet. Speakers emphasized the need for continued collaboration, knowledge sharing, and adaptation to emerging threats in the rapidly evolving digital landscape.
Key points
Major discussion points:
– The increasing use of the internet and emerging technologies by terrorist groups for recruitment, radicalization, and strategic communications
– Challenges faced by governments and tech companies in countering terrorist use of the internet, including legal/jurisdictional issues and capacity gaps
– The importance of public-private partnerships and multi-stakeholder collaboration in addressing these challenges
– The need for improved detection and analysis capabilities, including potential benefits of AI for content moderation
– Concerns about terrorist-operated websites and infrastructure
Overall purpose:
The goal of this discussion was to examine current trends, challenges and collaborative efforts to counter terrorist use of the internet and emerging technologies from the perspectives of various stakeholders including the UN, governments, tech companies and NGOs.
Tone:
The overall tone was serious and focused, reflecting the gravity of the topic. Speakers maintained a professional, analytical approach while emphasizing the urgency of addressing these issues. There was also an underlying tone of cautious optimism about the potential for improved collaboration and technological solutions to make progress in this area.
Speakers
– Jennifer Bramlette: Counterterrorism Committee Executive Directorate (CTED), speaking on behalf of the Executive Director
– Pedro Roque: Vice President of the Parliamentary Assembly of the Mediterranean (PAM)
– Arianna Lepore: Terrorism Prevention Branch of the United Nations Office on Drugs and Crime (UNODC)
– Adam Hadley: Executive Director and Founder of Tech Against Terrorism
– Dr. Erin Saltman: Membership and Programme Director from the Global Internet Forum to Counter Terrorism (GIFCT)
Additional speakers:
– Natalia Gherman: Executive Director of the Counterterrorism Committee Executive Directorate (mentioned but did not speak)
Full session report
Countering Terrorist Use of Information and Communication Technologies: A Multi-Stakeholder Approach
This discussion brought together representatives from various organisations to address the critical issue of countering terrorist use of information and communication technologies (ICT). The speakers, representing the UN Counter-Terrorism Committee Executive Directorate (CTED), the Parliamentary Assembly of the Mediterranean (PAM), the UN Office on Drugs and Crime (UNODC), Tech Against Terrorism, and the Global Internet Forum to Counter Terrorism (GIFCT), shared their perspectives on current challenges, initiatives, and collaborative efforts in this domain.
Evolving Threat Landscape
The speakers unanimously highlighted the evolving nature of terrorist threats in the digital space. Adam Hadley from Tech Against Terrorism emphasised a paradigm shift in how we view terrorist use of the internet, framing it as a strategic rather than merely tactical tool. This perspective broadens the scope of the discussion, encompassing not only recruitment and radicalisation but also strategic communications and infrastructure concerns.
Jennifer Bramlette from CTED noted the increasing exploitation of social media, video games, and emerging technologies like artificial intelligence by terrorist groups. The speakers agreed that terrorists are becoming increasingly entrepreneurial and imaginative in their use of technologies, adapting their techniques to evade detection and removal from major platforms.
Challenges in Countering Terrorist Use of ICT
Several key challenges were identified during the discussion:
1. Varying technological capabilities: Jennifer Bramlette highlighted the stark contrast between member states with advanced technological capabilities and those struggling with basic infrastructure, such as providing electricity to police stations. Many states face challenges in incorporating ICT into their counterterrorism systems effectively.
2. Legal and regulatory gaps: Both Jennifer Bramlette and Pedro Roque emphasised the urgent need for updated counter-terrorism laws and regulatory frameworks. Bramlette pointed out that most states lack laws to deal with crimes committed through or by artificial intelligence, raising questions about how to “arrest a chatbot” or “prosecute an AI”.
3. Jurisdictional complexities: The speakers noted the challenges posed by the borderless nature of cyberspace, emphasising the need for cross-border consensus building and clearer international frameworks.
4. Content moderation complexities: Dr. Erin Saltman from GIFCT illustrated the difficulties in content moderation, using the example of distinguishing between a foreign terrorist fighter and “literally just a man in the back of a Toyota”.
5. Balancing security and human rights: The speakers stressed the importance of respecting human rights and fundamental freedoms while implementing counter-terrorism measures online.
Collaborative Initiatives and Approaches
The discussion underscored the critical importance of multi-stakeholder collaboration in addressing these challenges:
1. CTED’s inclusive approach: Jennifer Bramlette described CTED’s efforts to bring together member states, international organisations, the private sector, civil society, and academia. CTED is working on developing non-binding guiding principles for member states on countering terrorist use of ICT and maintains a Global Research Network to foster knowledge exchange.
2. PAM’s legislative efforts: Pedro Roque highlighted PAM’s commitment to fostering dialogue and cooperation towards the regulation of AI and emerging technologies. PAM has created the Permanent Global Parliamentary Observatory on AI and ICT and publishes daily and weekly digests on AI and emerging technologies to keep parliamentarians informed.
3. UNODC’s capacity-building initiatives: Arianna Lepore discussed UNODC’s Global Initiative on Handling Electronic Evidence, which supports criminal justice practitioners. UNODC plans to expand its Practical Guide on Handling Electronic Evidence to include FinTech providers and is developing customized guides for specific countries. Additionally, UNODC is working on updating model legislation on mutual legal assistance to include provisions on handling electronic evidence.
4. Tech Against Terrorism’s technological solutions: Adam Hadley described their Terrorist Content Analytics Platform (TCAP) for identifying and verifying terrorist content online. The organisation maintains a 24/7 capability to respond to major terrorist attacks and focuses on addressing terrorist-operated websites, including challenges related to domain names and hosting. Their threat intelligence team also provides analysis and technical support to platforms.
5. GIFCT’s cross-platform solutions: Dr. Erin Saltman outlined GIFCT’s efforts to provide tech companies with tools and frameworks for countering terrorist content online. GIFCT maintains a hash-sharing database and an incident response framework. The organization has specific membership criteria and working groups, and plans to host regional workshops for knowledge exchange on local extremist trends. GIFCT also supports academic research through its Global Network on Extremism and Technology and a micro-grants program.
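The hash-sharing database mentioned above can be illustrated with a minimal sketch. This is our own toy example, not GIFCT's actual implementation (which relies on perceptual hashes such as PDQ so that re-encoded variants of the same media still match); here a plain cryptographic digest stands in for the shared signal, so only byte-identical content matches.

```python
import hashlib


def digest(content: bytes) -> str:
    """Exact-match digest of a media file's bytes (toy stand-in for a perceptual hash)."""
    return hashlib.sha256(content).hexdigest()


class HashSharingDB:
    """A shared set of digests contributed by member platforms."""

    def __init__(self):
        self._hashes = set()

    def contribute(self, content: bytes) -> str:
        # A member platform flags content; only the digest is shared.
        h = digest(content)
        self._hashes.add(h)
        return h

    def is_known(self, content: bytes) -> bool:
        """True if this exact content was already flagged by any member."""
        return digest(content) in self._hashes


# Platform A flags a propaganda video; platform B can then detect the
# identical file at upload time without the media itself ever being exchanged.
db = HashSharingDB()
db.contribute(b"<bytes of known violating video>")
print(db.is_known(b"<bytes of known violating video>"))  # True
print(db.is_known(b"<bytes of an unrelated upload>"))    # False
```

The design point is that members exchange signals about content, not the content itself, which is what makes cross-platform cooperation feasible at all.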
Emerging Technologies: Risks and Opportunities
The speakers discussed the dual nature of emerging technologies, particularly artificial intelligence:
1. Potential risks: Jennifer Bramlette noted that AI could exacerbate online harms and real-world damages.
2. Opportunities for counter-terrorism: Adam Hadley expressed hope that generative AI could improve the accuracy and volume of content moderation decisions.
3. Challenges in incident response: Dr. Erin Saltman raised concerns about the potential for AI-generated fake incident response content, emphasising the need for improved verification processes.
UNODC’s Work on the UN Convention Against Cybercrime
Arianna Lepore highlighted UNODC’s involvement in developing the new UN Convention Against Cybercrime, which aims to address the global challenges posed by cybercrime and provide a framework for international cooperation in this area.
Unresolved Issues and Future Directions
Several key issues remain unresolved and require further attention:
1. Regulation of terrorist-operated websites and domain names
2. Addressing jurisdictional complexities in cyberspace
3. Developing laws to deal with crimes committed through or by artificial intelligence
4. Balancing content moderation and free speech concerns
5. Verifying information during incident response in the age of AI-generated content
The speakers suggested potential compromises, such as combining list-based and behaviour-based approaches to identify terrorist content online, balancing technological solutions with human input and context in content moderation, and weighing both the risks and the opportunities of emerging technologies in counter-terrorism efforts.
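The combined list-based and behaviour-based approach can be sketched as follows. This is an illustrative assumption on our part, not any platform's actual logic: the blocklist, thresholds, and field names are hypothetical.

```python
# Toy sketch of combining two complementary detection signals:
# a list-based check against known violating material, and a
# behaviour-based heuristic for suspicious posting patterns.

KNOWN_BAD_URLS = {"badsite.example/propaganda"}  # hypothetical blocklist


def list_based_flag(post: dict) -> bool:
    """Flag posts linking to material already identified as terrorist content."""
    return any(url in KNOWN_BAD_URLS for url in post.get("urls", []))


def behaviour_based_flag(post: dict, posts_last_hour: int) -> bool:
    """Flag bot-like mass posting from very new accounts (illustrative thresholds)."""
    return posts_last_hour > 50 and post.get("account_age_days", 0) < 2


def needs_review(post: dict, posts_last_hour: int) -> bool:
    # Either signal routes the post to human review: the list catches known
    # content, while the heuristic catches novel content the list misses.
    return list_based_flag(post) or behaviour_based_flag(post, posts_last_hour)


print(needs_review({"urls": ["badsite.example/propaganda"]}, posts_last_hour=1))  # True
print(needs_review({"urls": [], "account_age_days": 1}, posts_last_hour=120))     # True
print(needs_review({"urls": [], "account_age_days": 400}, posts_last_hour=3))     # False
```

Note that neither signal removes content on its own; both merely queue it for human review, reflecting the speakers' emphasis on keeping human input and context in the loop.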
Conclusion
This discussion highlighted the complex and evolving nature of terrorist use of ICT and the need for a comprehensive, collaborative approach to address these challenges. The speakers emphasised the importance of public-private partnerships, international cooperation, and adaptive strategies to keep pace with technological advancements. As the digital landscape continues to evolve, ongoing dialogue, knowledge sharing, and collaborative efforts among diverse stakeholders will be crucial in effectively countering terrorist use of the internet and emerging technologies.
Session Transcript
Jennifer Bramlette: Just doing a mic check. Good afternoon. It’s working. Excellent. Yes. Mic check. Mic check. Everybody can hear. Excellent. Distinguished colleagues, good afternoon and welcome to all here in the room and joining us virtually for this IGF Open Forum on Countering the Use of Information and Communication Technologies, or ICT, for Terrorist Purposes. I welcome you on behalf of the Executive Director of the Counterterrorism Committee Executive Directorate, Assistant Secretary-General Natalia Gherman. It is a great pleasure to hold CTED’s first session at an IGF event here in Riyadh. And it’s an honor to be here today with some of CTED’s close operational partners: the Parliamentary Assembly of the Mediterranean, or PAM; the Terrorism Prevention Branch of the United Nations Office on Drugs and Crime, or UNODC; Tech Against Terrorism, joining us virtually; and the Global Internet Forum to Counter Terrorism, or GIFCT, also joining us virtually. I would like to begin this session by explaining the work of CTED. As a special political mission supporting the Security Council’s Counterterrorism Committee, CTED is mandated to conduct assessments of member states’ implementation of United Nations Security Council resolutions on counterterrorism on behalf of the Counterterrorism Committee. In this work, CTED identifies good practice and also gaps in implementation, for which CTED works with partner organizations and states to facilitate technical assistance. CTED is additionally mandated to identify emerging trends and evolving terrorism threats, including through collaboration with the members of CTED’s Global Research Network. Terrorist groups and their supporters continue to exploit the internet, social media, video games, and other online spaces, as well as emerging technologies, to engage in a wide range of terrorist-related activities. 
Developments in artificial intelligence and quantum technologies have the potential to exacerbate the risks for online harms and real-world damages. Yet, these valuable technologies offer immense benefits to society, and when used in a manner consistent with international law, they can be most useful tools for preventing and countering terrorism. When it comes to countering terrorism and violent extremism conducive to terrorism, the United Nations Security Council has developed a robust framework of resolutions and policy documents. The Council has adopted 16 counter-terrorism-related resolutions and five policy documents over the past 23 years that specifically address ICT and now emerging technologies. Through these, the Council has mandated CTED to work on a growing list of increasingly complex and technologically advanced issues relating to countering the use of ICT and other emerging technologies for terrorist purposes. As such, CTED has mainstreamed ICT-related issues, including now AI and other emerging technologies, into its workstreams. In our capacity to identify new trends and emerging threats, CTED draws attention to how exponential leaps in the development and applicability of digital tools and emerging technologies could enhance terrorist capabilities. CTED also identifies what legal, policy, and operational measures UN member states could implement and how they could use new technologies to increase the effectiveness of their counter-terrorism efforts. For example, the 2022 Delhi Declaration tasked CTED to develop non-binding guiding principles for member states to counter the use of Unmanned Aircraft Systems, or UAS, new financial technologies, and ICT for terrorist purposes. The Abu Dhabi Guiding Principles on threats posed by the use of Unmanned Aircraft Systems for terrorist purposes were adopted in December, 2023. 
The committee is currently negotiating the guiding principles on new financial technologies and will turn its attention to the ones for ICT. In carrying out its various activities, CTED holds two main principles at the forefront. Firstly, we draw particular attention to respect for human rights, fundamental freedoms, and the rule of law in the use of ICT and new technologies by states when countering terrorism. We also promote whole-of-society, whole-of-government, and gender-sensitive approaches as essential components for successful counterterrorism efforts. Secondly, we consistently emphasize the need for cooperation, collaboration, and partnerships. CTED follows an inclusive approach that brings together member states, international, sub-regional, and regional organizations, the private sector, civil society, and academia. This is an essential component of a multi-stakeholder digital environment. It is also necessary for member states to develop holistic, effective, and technologically advanced counterterrorism regimes. I will further detail CTED’s work on ICT in the technical panel, but now it is my great pleasure to welcome the Honorable Mr. Pedro Roque, the Vice President of the Parliamentary Assembly of the Mediterranean and one of our longstanding partners in the fight against terrorism to take the floor. Sir, I yield the floor to you.
Pedro Roque: No, you can hear me now. I think now it’s fine. Thank you so much. So, ladies and gentlemen, dear friends, it is an honour and a pleasure to address the opening of this event. PAM, the Parliamentary Assembly of the Mediterranean, greatly values the fruitful cooperation with CTED, which resulted in the invitation to PAM to join the CTED Global Research Network, as well as a few other significant outcomes that I will mention during this intervention. I wish to thank also the colleagues of UNODC, the Global Internet Forum to Counter Terrorism, and Tech Against Terrorism for all the work you do with AI and ICT. PAM is an international organisation which gathers 34 member and associate national parliaments from the Euro-Mediterranean and Gulf regions. At present, PAM members are fully committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, thus supporting the efforts of the United Nations and the international community in this regard. If not properly regulated in a timely and effective way, the rapid advancement of AI and emerging technologies could severely harm democratic systems, disrupt societal structures, and pose significant risks to security and stability. Concrete actions and legislative frameworks for regulating AI and ICT should build on multi-stakeholder collaboration while ensuring compliance with international human rights law and the protection of individuals’ fundamental freedoms. At the request of the UN Secretary-General, PAM actively participated and contributed to the preparations of the UN Summit of the Future, held in New York last September. In conjunction with the summit, PAM also organized a high-level side event on parliamentary support in re-establishing trust and reputation in multilateral governance. 
This event was held in cooperation with CTED, the permanent missions of Morocco and Italy to the UN, and the Inter-Parliamentary Union. To achieve this objective, PAM parliaments committed to implementing the actions outlined in the Pact for the Future, particularly in its annex, the Global Digital Compact. This includes promoting a scientific understanding of AI and emerging technologies through evidence-based impact assessments, as well as evaluating their immediate and long-term risks and opportunities. Dear friends, through 2024, PAM experts, supported by our Center for Global Studies, CGS, and in partnership with CTED, devoted a major part of their work to monitoring and analyzing the developments of AI and emerging technologies, as well as their abuse by terrorists and criminal organizations. PAM-CGS has produced and recently released a report entitled The Malicious Use of AI and Emerging Technologies by Terrorists and Criminal Groups: Impact on Security, Legislation and Governance. This comprehensive research project, drafted in partnership with CTED, not only benefited from first-hand insights by PAM member parliaments, but also went through a rigorous peer-review process conducted by several PAM strategic partners, including, among others, Amazon, Interpol, Média Duemila, a network of national and international media organizations, NATO, the Policy Center for the New South and UNOCT. The main outcomes of the report are, first, the creation of the PAM Permanent Global Parliamentary Observatory on AI and ICT, designed as a platform to monitor, analyze, promote and advocate for effective legislation, principles and criteria. The observatory is located in the Republic of San Marino and is supported by PAM-CGS. Second, the publication of a daily and weekly digest, compiled from open sources, providing PAM parliaments and stakeholders with up-to-date news and analysis on trends in AI and emerging technologies. 
The digest covers key areas of interest, including governance, security, legislation, defense, intelligence and warfare. In conclusion, I would like to highlight two important resolutions that PAM parliaments adopted during the 18th PAM plenary session held in Braga, Portugal in May 2024. One resolution focused on digitalization, emphasizing the need to bridge the digital divide and promote equal access to digital technologies both across and within PAM countries. It also acknowledges the role of digital transformation in advancing the achievements of the UN Sustainable Development Goals. The second resolution addresses artificial intelligence, urging the allocation of resources to advance AI research and development, with an emphasis on fostering innovation while safeguarding human rights, fundamental freedoms, privacy protection and non-discrimination. PAM will further explore these issues at its 19th plenary session scheduled for February 2025 in Rome and during its new tenure as the Presidency of the Coordination Mechanism of Parliamentary Assemblies on Counterterrorism, including its political dialogue pillar. Additionally, I would like to inform you that PAM-CGS is currently working on two new reports. One focuses on the resilience of democratic systems in relation to the misuse of AI and new technologies and another, at the request of CTED, on the use of spyware and its legislative regulation. PAM will continue to collaborate with the United Nations, the Internet Governance Forum, its Member States and all stakeholders to shape a safer and more equitable digital world. I thank you for your attention.
Jennifer Bramlette: and context about the fight against terrorism and the malicious use of artificial intelligence, both from a cross-regional perspective and from the perspective of key government actors and partners, namely parliamentarians. And I don’t know if anybody in the room has been able to sit in on any of the parliamentarian track that’s happening way down at the end of the far corner, but the speakers there are phenomenal, the parliamentarians present are so engaged, and it is essential to have all of government on board, including the elected officials. So, as I mentioned while wearing the hat of the CTED Executive Director, I’d like to come back to the technical aspects of CTED’s ICT mandate as given by Security Council resolutions. And perhaps would somebody be kind enough to shut the door? Not that it’ll block the microphone from the other events that much, but that’s great. Thank you so much. So some of our mandates, I mean it’s a very wide-ranging mandate that we have for ICT. The specifics of it include preventing the use of ICTs for terrorist purposes, including for recruitment and incitement to commit terrorist acts, as well as for the financing, planning, and preparation of their activities. We have a mandate for countering terrorist narratives online and offline, gathering, processing, and sharing digital data and evidence, cyber security, but only in relation to the protection of critical infrastructure, and countering the financing of terrorism via new financial technologies and payment methods like crowdfunding. CTED is additionally looking at new trends and evolving threats in terrorist use of ICT, to include threats and risks relating to advances in AI, the role of algorithmic amplification in promoting harmful and violent content, the misuse of video gaming platforms and related spaces, and risks associated with terrorist exploitation of dual-use technologies like 3D printing and advanced robotics. 
As part of its work on human rights and fundamental freedoms, CTED addresses areas related to the programming behind AI and algorithm-based systems to ensure that it does not include bias, for example. We also look at privacy, data protection, and the lawful collection, handling, and sharing of data, and transparency and accountability for governments and the tech sector when it comes to content removal practices and data requests. Through its many assessment visits, CTED has noted that member states face a range of challenges when it comes to countering the use of ICT for terrorist purposes. Many of these stem from the sheer numbers and diversification of users across a multitude of decentralized online spaces and using a myriad of digital tools. So the rapid increase, availability, and technological capabilities of AI and other emerging technical tools, and of course the continued social, economic, and political drivers of violence, extremism, and terrorism. The three together make a perfect storm for terrorists being able to operate with sometimes seeming impunity with many of the challenges that member states are facing. And where some of these challenges really come to bear is how they’re incorporating ICT into their own counterterrorism systems, both in consideration of their existing resources and capabilities and in respect to compliance with their obligations under domestic and international human rights law. So for example, there are member states who are extremely technologically advanced who have no trouble bringing new tech in and onboarding it using virtual reality and alternate reality or augmented reality systems to test strategies, to work through contingency plans for training in the event a terrorist attack does happen, whereas other member states have trouble getting electricity to their police stations. So, as technology increases, this gap is widening. 
One of the biggest capacity gaps we note from our dialogue with member states is a shortage of tech talent and cutting-edge equipment in government entities. Issues of how to build that tech talent and then attract it into government positions and then retain it when the private sector and other avenues offer greater financial rewards are pressing questions, and there are no simple or inexpensive solutions. Another common shortfall observed in many states is that the criminal justice systems, especially traditional criminal justice systems, are just not designed to address crimes committed in online spaces or through cyber means. So where you have countries who are still meeting in courtrooms without video cameras, without screens, without a capacity to handle electronic evidence or do video interviews, it’s almost impossible for them to prosecute crimes that are committed online, where you are entirely reliant on the admission of electronic evidence and other digital tools and digital forensics to build a case and for a judge to try it effectively. Also, most states don’t even have laws on their books to deal with crimes committed through or by artificial intelligence. We’ve even been asked by authorities, like, how can we arrest a chatbot? How can we prosecute an AI? And those are really good questions, and there are no templated answers. Perhaps Ari can talk about whether there are any plans for UNODC or any other entity to build a model law. There are also jurisdictional complexities in cyberspace. For example, gray-area content could be illegal in one country, but not in the countries bordering it. And so, like the examples outlined by PAM, many states are working together to build a cross-border consensus and to implement multilateral legal and operational frameworks to deal with these and many other ICT-related challenges. 
CTED, the Counterterrorism Committee and the UN Security Council are also working through their international frameworks and multi-stakeholder processes to help states address these challenges. In developing the non-binding guiding principles for member states on ICT, CTED collaborated with over 100 partner agencies, including law enforcement and security services, legal and criminal justice sectors, capacity-building entities, the private sector, technological companies, academia and civil society organizations to gather good practices and effective operational measures for ICT and emerging tech. Some of the areas addressed by the draft guiding principles include the conduct of regular risk and readiness assessments. This is something that has been identified as good practice, but not nearly enough member states do it. They might do it once, they might not do it at all, but very few conduct regular risk and readiness assessments. And by readiness assessments, I mean a state looking at its own capacities, its own resources, and a future look as to whether or not what it has ordered through its procurement processes is going to be useful when it finally gets delivered three years down the road. Other areas of the guiding principles include the need for updating counter-terrorism laws and regulatory frameworks, obviously. The development of guidelines for strategic communications and counter-messaging algorithms. This is both for states and for the tech companies. The creation of content moderation and cross-platform reporting mechanisms, and recommendations for online investigations and how to more effectively and lawfully handle digital evidence. CTED cataloged these effective practices and noted a number of other ones relating to safety by design, ethical programming, and the conduct of security and human rights impact assessments for AI and algorithm-driven systems. 
We also captured the positive impact already demonstrated by investment in digital and AI literacy programs for all levels of society. We further developed the guiding principles to ameliorate a range of concerns about the serious adverse effects on human rights that the use of new technologies by states without proper regulation, oversight, and accountability is having. I’d like to conclude by highlighting that many Security Council resolutions and the Delhi Declaration stress the importance of partnerships, in particular public-private partnerships. CTED actively cooperates with Tech Against Terrorism, the Christchurch Call, and the industry-led Global Internet Forum to Counter Terrorism, two of which are up next in our panel. And I’d like to now turn the floor to Arianna Lepore from the Terrorism Prevention Branch of the United Nations Office on Drugs and Crime, another close operating partner and dear friend, to discuss the work of the TPB on ICT and electronic evidence. Thank you.
Arianna Lepore: Thanks very much, Jennifer. Thanks for inviting us, UNODC, here. We have a long-standing partnership with CTED, as well as with PAM and colleagues Erin and Adam. So it’s great to pick up where you left off on the importance of partnership. And we hope that here in this forum too we are able to establish contacts and continue our dialogue together. The work of UNODC blends naturally with the work of CTED: our colleagues in CTED inform our work through their assessments, and thanks to the mandate that UNODC has, which is to provide technical assistance to member states in the fight against terrorism, UNODC, and in particular its Terrorism Prevention Branch, where I belong, puts together programs and projects in order to support and build the capacity of criminal justice officials in fighting terrorism. UNODC operates under Security Council resolutions, the 19 Conventions, and the Secretary-General’s Action Plan on CVE, so we have a very stringent mandate, and for a few years now we have been sparing no effort on the issue of ICT, and as we go forward, we are expanding and delineating a new strategy on how to deal with emerging technology. It was seven years ago, in 2017, in the aftermath of the adoption of Resolution 2322, which requested member states to increase the level of international cooperation, in particular in the handling of electronic evidence, that UNODC launched what we call the Global Initiative on Handling Electronic Evidence, which I coordinate. The Global Initiative on Handling Electronic Evidence was conceptualized with colleagues at CTED and with the International Association of Prosecutors, and now it’s a flagship project of UNODC. 
The Terrorism Prevention Branch sits in Vienna, but UNODC has regional and country offices, including here in Saudi Arabia, and my colleague is the head of the office here, so we have the capacity to reach out at the ground level and build very close relationships with the practitioners we work with. So the Global Initiative was launched seven years ago, and its purpose was exactly that: first of all, to foster public-private partnership. It was thanks to CTED’s efforts, and our own efforts to work closely with the private sector, that the initiative is a fully-fledged project with a holistic approach: it involves the private sector, the experts, the practitioners, and academia, and we have developed different streams of work. The goal is to support law enforcement, prosecutors, judges, and central authorities, the competent authorities for international cooperation, in the preservation and production of electronic evidence for criminal cases. How did we do that? Through the development of tools, which is our bread and butter, including, of course, the development of model legislation. Now, I’d like to focus our attention on the main tool of this Global Initiative, which is the Practical Guide on Handling Electronic Evidence. It has been an extensive piece of work done with colleagues at CTED and with colleagues representing the tech industry, and it is a guide, technically a manual, that informs criminal justice practitioners step by step on how to request preservation of electronic evidence, emergency disclosure, and voluntary disclosure, and, where direct requests are not possible depending on the data requested, how to begin a formal mutual legal assistance process. It contains a mapping of what are now more than 100 service providers, at the moment ICT service providers. 
Nevertheless, just last week in Vienna, we conducted the very first expert group meeting to include FinTech providers, creating the link between electronic evidence and financial electronic evidence, because we heard from practitioners that connecting the two is more and more an emerging need. So very soon we will have an annex to the guide containing a mapping of VASPs and FinTech providers and explaining how to approach them with requests for preservation, disclosure, and so forth, and all the procedures that entails. The guide also contains model forms on how to request that information from the private sector, because back then we were hearing, and seeing, the complaints of criminal justice officials: they would send requests to the private sector, and the requests were never answered. But then we spoke with the private sector, and they said the type of requests they received were impossible to answer: terabytes and terabytes of material, ten years of evidence being requested, impossible. So we tried to seat them all around the table, and we developed forms that diligently contain all the elements that would enable the private sector, the providers, to respond. So the guide is the main tool around which all the capacity-building support that UNODC offers is constructed. Now, the guide is global in nature, but as our program advances, we have increasingly customized the guide, tailoring it to the specific member states that have requested it. So we have a customized guide for Pakistan, for India, for the Maldives, and we keep counting. Member states come to us, we do thorough research on their procedural law and legislation, and then, instead of quoting worldwide legislation, we design a guide specific to that country. 
And this is one of the priorities of UNODC. To make our work sustainable, we also develop train-the-trainer modules on the Practical Guide, so that we can embed the guide within training institutes and keep the transmission of knowledge up and running. The issue of model legislation that Jennifer mentioned is fundamental. UNODC does this in the context of its work across all crime types, but specifically, in 2021 we updated the UNODC model legislation on mutual legal assistance, which now contains provisions on handling, receiving, and transmitting electronic evidence. So when countries come to update their MLA law, they can come to us, request assistance, and see what type of provisions we have put together. All of those tools are available on our platform; we have created an Electronic Evidence Hub. But we have not stopped there. There are two last points I’d like to make. The first, as Jennifer said and as colleagues will probably also say, is that technology is advancing. Jennifer mentioned some of the challenges we will face; we are already facing them: artificial intelligence and all those emerging technologies. So UNODC is also developing a strategy on how to go about this: to counter the misuse of technology, but also to utilize technology to counter terrorists. There is this dual challenge that UNODC will try to address, and you will hear more about our interventions. And last but not least, a word on the new convention on cybercrime. As everyone knows, in the next few days, almost certainly, the text of the United Nations Convention Against Cybercrime will be adopted by the UN General Assembly. Now, it is a convention on cybercrime; nevertheless, there is an important segment in the draft as it stands now that speaks about electronic evidence. 
So obviously that will also inform the work we are doing, and we will monitor closely how the adoption goes and what the next steps will be: when it is ratified, the protocols, and so forth. So, Jennifer, I will stop here, and I thank you for this opportunity.
Jennifer Bramlette: Thank you very much, Arianna. I would now like to turn the floor to… Mr. Adam Hadley, CBE, who is the Executive Director and Founder of Tech Against Terrorism. Adam, the floor is yours.
Adam Hadley: Jennifer, thank you very much. Can you hear me well from there? Yep, great. Wonderful. Well, thank you very much for having us today to present the work of Tech Against Terrorism and some of our concerns at the IGF. We certainly consider the IGF to be a vital forum to discuss important matters such as the terrorist use of the internet. I’d like to frame our discussion around a paradigm shift in how we view the terrorist use of the internet. Historically, the terrorist use of the internet has been seen as a tactical tool for recruitment and radicalization, but increasingly our concern is that the internet is becoming a strategic battleground for terrorists and hostile nation states, but mainly for terrorists. So as well as sharing three critical challenges that we see at Tech Against Terrorism, I’ll outline one positive potential for generative AI, and then suggest a need to focus on countering the terrorist use of internet infrastructure, in particular terrorist-operated websites. So who are we at Tech Against Terrorism? What’s our mission? Well, our mission is to save lives by disrupting the terrorist use of the internet, and we’re proud to have been established by UNCTED back in 2017 as a public-private partnership focused on bridging the divide between the private sector and the public sector. Accordingly, we’ve been recognized by a number of Security Council resolutions, and, as Jennifer mentioned, the Delhi Declaration. Most recently, we’ve been referenced in Security Council Resolution 2713, which encourages Tech Against Terrorism to support the Government of Somalia in countering the use of the internet by Al-Shabaab. We were established, as I said, to improve connections between companies and governments. We’re a small, independent NGO based in London, and we work across the entire digital ecosystem. Effectively, we aim to understand where terrorists are using the internet and what practically can be done about it. 
We’re global in approach and we have 24/7 coverage. I’d also like to recognise the great efforts of many other organisations: of course, as well as UNCTED, there’s the EU Internet Forum, the Christchurch Call to Action, our partner initiative Tech Against Terrorism Europe (TATE), funded by the EU, the Extremism and Gaming Research Network, the Institute for Strategic Dialogue and, of course, the GIFCT. I’m delighted that Erin is able to join us from the GIFCT in a few moments. At Tech Against Terrorism, we focus on the most egregious examples of terrorist use of the internet. It’s very important to stress that we predominantly focus on those terrorist organisations that have been designated by the UN, the US, the EU and other international bodies. This doesn’t mean that we don’t look at the broader range of activity that terrorists conduct online, but rather that we believe it’s important to focus where there’s consensus, recognising that it is in that focus that we will be able to have the most impact. In terms of the teams at Tech Against Terrorism, we have our own open-source threat intelligence team, we work with governments and platforms to build capacity, and we also develop technology to speed up the ability of our analysts and others to detect the terrorist use of the internet. In doing all of this, we aim to share resources cost-free in a collaborative way, and we have a number of resources available to platforms and governments, such as the Knowledge Sharing Platform. We also provide hashing services and other technical support services to platforms, including a trusted flagger portal. Now, in terms of the current landscape, what we’d argue is that we are currently seeing some of the most egregious examples of terrorist use of the internet in the last decade. Of course, platforms and governments face many threats; geopolitical instability is now the highest it has been for many decades. 
And therefore, understandably, platforms and governments have many concerns to focus on. But what is certain is that counterterrorism is no longer the primary concern of many of these stakeholders, and arguably it should be. Since October 2023, we’ve seen terrorist content online reach unprecedented levels, from terrorist organisations such as the Islamic State and al-Qaeda, and also the Houthis, Hamas, Hezbollah, and al-Shabaab. Quite frankly, the terrorist use of the internet is now at such high levels that we’re really not sure what to do about it alone. And therefore we call for improved action from the tech sector, from governments and from others, to ensure that the correct and appropriate level of resources is being brought to bear to tackle this. In our view, this threat is manifest, of course, offline more than anything else. We know that terrorist groups are regrouping. We know that attacks in Africa are very high. We know the risks coming from Central Asia with regard to ISKP. And their use of the internet is commensurate with this increased threat. The question is what to do about it. So at Tech Against Terrorism, we have some technology, mainly the Terrorist Content Analytics Platform, the TCAP, which seeks to identify and verify terrorist content online. But we can’t do this on our own, which is why we commend the continued efforts of the GIFCT to share its resources, capabilities and know-how with the broader community. It’s great that there are industry-led initiatives like the GIFCT investing so much in this space, and we encourage the GIFCT to continue to do this in the future, and to continue to be funded by the tech sector. At Tech Against Terrorism, we currently alert more than 140 platforms, and we work with a range of stakeholders, governments and tech companies, and that’s how we’re funded, in quite an independent and transparent fashion. So we see three key challenges. 
The first, as alluded to by Jennifer at UNCTED just now, is around strategic communications. Historically, the terrorist use of the internet has been considered in quite a tactical way, purely in terms of radicalisation and countering it. But we’d argue that terrorist groups also use the internet for strategic communications purposes, and most terrorist organisations are looking to have a political effect: to promote their domestic popularity or to project international standing. We therefore think it’s paramount that the way we counter the terrorist use of the internet doesn’t just consider radicalisation, recruitment and incitement, but also the political value of that speech. If terrorists are able to share their messages on social media, on messaging apps, and on their own websites, this is worth a lot to them strategically, and therefore, in the context of hybrid warfare, countering terrorist strategic communications is of vital importance. The second challenge is around infrastructure. Quite rightly we talk about the tech sector, and the tech sector has done an enormous amount over the years, as supported by the GIFCT, but we mustn’t forget other sources of terrorist activity online. Terrorists can now create their own websites, their own apps, their own technologies. This presents a number of jurisdictional challenges, in particular at the governance level of the internet. Can terrorists, and should terrorists, be allowed to run their own websites? Should ISIS or Al-Qaeda have the right to buy their own domain name? If not, what should we do about it? Unfortunately, this is not a theoretical issue. We are seeing hundreds of these websites being set up, and it is often extremely difficult working with internet providers because of ambiguities around jurisdiction. 
What we are finding is that terrorists are increasingly entrepreneurial and imaginative in how they use technologies. In many cases, they’re also going back onto the major platforms and proving quite difficult to dislodge as they adapt their techniques: they hide their content and become better at evading automated responses. This is not a criticism of the tech sector at all; I’m merely highlighting the formidable challenge that platforms face in keeping ahead of an extremely sophisticated adversary. But infrastructure is something I wanted to bring to the attention of the IGF, because surely more needs to be done to establish international frameworks when we have designated terrorist organisations buying domain names and hosting for their websites. The third challenge is about the detection and analysis of terrorist content. There is a very large amount of terrorist content online, and, somewhat paradoxically, it is hardest to analyse on large platforms: for data privacy reasons and other perfectly reasonable explanations, very large platforms are not easy to analyse at scale. What this means is that analysis of small platforms is easier, and analysis of larger platforms is more difficult. We therefore ask for improvements in data access, while recognising some of the challenges in terms of data privacy where that’s concerned. We commend platforms for doing what they can to share more about their activities in often very comprehensive transparency reports. So, moving to the end of my intervention, I certainly recognise the expert opinion that has been shared about the risks associated with AI and generative AI. We would argue, however, that generative AI also provides a significant opportunity to improve the accuracy and volume of content moderation decisions online, to ensure that terrorist content can be detected at scale accurately. 
The accuracy is very important, because in everything we do at Tech Against Terrorism and UNCTED, and I believe at the GIFCT, of course we have to counter the terrorist use of the internet, but we have to ensure that fundamental freedoms and human rights are upheld. I remain hopeful that generative AI will provide the capability to make more accurate content moderation decisions, and I certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations. So, looking towards 2025, the underlying threats are increasing, internationally and domestically. Internationally we have IS, al-Qaeda, al-Shabaab, and many other terrorist organisations committing acts of violence in person, offline. We’re also seeing, in a number of countries, increased youth involvement in terrorist activities for reasons not fully understood, and we’re seeing terrorists get better at exploiting grievances regarding geopolitical instability and state failure; the role of the internet is only becoming more and more important in this. And yet, geopolitically, there is a risk that consensus about jurisdiction, where the internet is concerned, will reduce over time. There is a very real risk that at the very time we need increased global consensus about internet governance, it may be more difficult to achieve because of geopolitical tensions. Our work at Tech Against Terrorism will continue. We’re a small NGO of around 10 people, and we hope that our 24/7 capability will help in responding to major terrorist attacks. We’ll be launching TrustSMART and a number of other services in support of the tech sector and governments. Concluding my remarks, I would like to emphasise that it’s also important to talk about infrastructure, and in particular terrorist-operated websites. 
How can it be right for designated terrorist organisations to be able to register their own domain names? In fighting this, we would ask for improved clarity about jurisdiction and standardisation of responses. We commend the Somali government for doing such good work in this space and would encourage others to follow the model the Somali government has set in taking down content and activity by al-Shabaab. So the Internet’s role in global security has never been more critical. As we face the challenges of the next year, we believe that responding to the terrorist use of the Internet will be vital to ensuring global stability. The question is not whether we can stop terrorists using the Internet, but what we can do together, in a collaborative way and upholding fundamental freedoms, to push back against terrorist content and activity online. Thank you very much for your attention to these critical matters. And I will yield the floor to UNCTED. Thank you very much.
Jennifer Bramlette: Adam, thank you very much. As with the intervention from TPB, I’m not even going to try to summarise what you said. And in the interest of time, I want to make sure that Dr. Saltman has a full measure of time to talk about the work of the GIFCT. So Dr. Saltman, Membership and Programme Director of the Global Internet Forum to Counter Terrorism, you now have the floor.
Dr. Erin Saltman: Thank you so much. And it’s always a pleasure to go last, so as not to have to repeat any of the wonderful and very timely points that my colleagues have made. Many thanks to UNCTED as well as the IGF for hosting a session on this topic and for allowing those of us who couldn’t attend in person to dial in virtually. We have a bit of FOMO; we wish we were in the room with you. I want to talk a little bit about what GIFCT does and who we are, for those that don’t know us very well, and try to leave some room for questions too. If you don’t know the Global Internet Forum to Counter Terrorism, as was mentioned, we’re a little bit of a unique NGO. We are a non-profit, but we were in fact founded by tech companies to help tech companies counter terrorism and violent extremism, with multi-stakeholderism built into our governance and our programmatic efforts. Just as terrorism has always been transnational, it is also very cross-platform, and I’ll bet very few people in the room have just one app on their phone. So we should be educating ourselves, looking at normative behaviors online, and realizing that bad actors, terrorists, and violent extremists are also very cross-platform, as Adam mentioned, in many of their efforts. With that, we realized we needed a safe space for tech to work together. Our efforts break down into roughly four buckets. One is cross-platform tech solutions; I’ll speak briefly to that. One is incident response: increasingly, there are offline, real-world attacks and events taking place where the perpetrators and accomplices use online aspects or assets to further the harm of their terrorism. 
We also want to further research and knowledge sharing, as well as information exchange and capacity building, and that includes work with governments and civil society so that knowledge exchange is really holistic, because the signals a tech company is seeing are distinctly different from how law enforcement might be approaching it or how civil society is experiencing it on the ground. Because we share such sensitive information and provide a platform for information sharing around such time-sensitive issues, we do have membership criteria, which is also a little bit unique: you can’t just come in the door and work with us. You have to meet a threshold, and this was built out in consultation with our independent advisory committee, which includes UNCTED, among other government and non-governmental officials and experts. This includes making sure that tech companies that work with us have things like an annual transparency report, a public commitment to the UN Guiding Principles around human rights, clear terms of service, and the ability to report something like terrorist content. We largely take for granted on social media that you can report and flag content, but obviously on other platforms, like terrorist-operated websites, perhaps to Adam’s point, or certain gameplay spaces, it might not be intuitive how you would flag a terrorist or violent extremist signal you’re seeing to a platform or to the authorities. Once you become a member of GIFCT, the cross-platform tech solutions include a scaled hash-sharing database, where GIFCT and our member companies can ingest hashed content of terrorist and violent extremist material when it fits our criteria. And there were some questions in the chat here around defining terrorism, on which, again, a million PhDs and a million more are needed. 
There is no conclusive agreement, but we talk to the tech companies that are members, and we have 30-plus member companies, which include the largest ones, your Microsofts, your Amazons, your Metas, your Googles, but also smaller and medium-sized companies like JustPaste.it, Discord, Twitch, or Zoom, companies that never thought they’d have to come to the table and talk about terrorism until they realized exploitation was happening. And so when we talk about hashing terrorist content, we began with a list-based approach. Companies do have consensus where they look to the UN designation lists around terrorism, look at terrorist individuals and groups, and can find common ground there. But we realized very quickly, in consultation with human rights experts and civil society organizations, that there is an Islamist-extremist bias in most lists in a post-9/11 framework, and we wanted to get at some of the neo-Nazi and white supremacy attacks that we know are taking place in different parts of the world. And so we also started building in behavior-based buckets. When you look at online content, a list doesn’t always cut it; a group affiliation or card-carrying membership is not always clear in how terrorism, and especially lone-actor terrorism, takes place. So our behavior-based buckets include things like hashing attacker manifestos, where the content in and of itself justifies a terrorist attack, or branded terrorist and violent extremist content, which gets not only at some of these Islamist publications but at some of the white supremacy, neo-Nazi, and other forms of violent extremist publications online. And every year we have a multi-stakeholder incident response and hash-sharing working group, to constantly evaluate and ask: where can we go further? Should we expand this taxonomy? If we expand inclusion, would that impinge on free speech and other human rights concerns? 
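The hash-sharing mechanism described above can be sketched in a few lines: a member platform computes a digest of a piece of verified terrorist material and contributes only the digest, tagged with a taxonomy bucket, so that other members can match re-uploads without the underlying media ever being shared. This is a minimal illustrative sketch, not GIFCT's implementation; the class and labels below are hypothetical, and a plain SHA-256 digest only catches byte-identical copies (in practice, perceptual hashes such as PDQ are used so that re-encoded or slightly altered copies still match).

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Digest a piece of media so the digest, not the media, can be shared."""
    return hashlib.sha256(content).hexdigest()

class HashSharingDB:
    """Hypothetical shared database: members contribute hashes of verified
    material, tagged with the taxonomy bucket it falls under
    (e.g. "attacker-manifesto", "branded-content")."""

    def __init__(self):
        self._hashes = {}  # digest -> taxonomy label

    def contribute(self, content: bytes, label: str) -> None:
        self._hashes[fingerprint(content)] = label

    def check(self, content: bytes):
        """Return the taxonomy label if an upload matches a shared hash, else None."""
        return self._hashes.get(fingerprint(content))

# One member platform contributes a verified item; another checks an upload.
db = HashSharingDB()
db.contribute(b"<verified propaganda video bytes>", "branded-content")
print(db.check(b"<verified propaganda video bytes>"))  # branded-content
print(db.check(b"<unrelated holiday video>"))          # None
```

The key design property is that only digests cross platform boundaries: a match tells a member which taxonomy bucket the upload falls into, and the member then applies its own policies, which is why the taxonomy debates described above matter so much.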
And so this is an iterative and evolving process over time. Hashed content also evolves in form. When we think content, we often think image or video, but in fact, as Adam mentioned with the TCAP, that’s for flagging URLs. Or when we see a terrorist attacker’s manifesto, that’s usually in PDF form. So the forms of content that can be hashed have also had to evolve over time. On top of this cross-platform tooling there is incident response. Particularly as a critical point after the Christchurch attacks in New Zealand, tech companies really wanted to work together to stop the viral spread of perpetrator-related content in and around an attack. Not every single event will have a live stream, but since the Christchurch event we have seen a number of lone-actor or otherwise planned attacks that do have these online aspects at play, such as a live stream, the publishing of a manifesto, or, as in Halle, Germany in 2019, the PDF publication of a how-to for 3D-printing a gun. So again, these are all assets in and around an attack, and we want to be able to hash and share them. Our incident response framework allows us to increase knowledge sharing, communicate with affected governments and law enforcement where appropriate, and share verified information. We’ve mentioned generative AI in the last few comments, and it’s also a concern what might happen when you start getting fake incident-response content in and around something that might or might not have even happened. How do we quickly verify and share information to stop the viral spread of misinformed, or actually misleading, incident content? This sort of verification process will be key to future incident response efforts.
When we think of adaptation, this is where knowledge exchange, active learning, training, and capacity building between sectors is really critical. We do fund an academic wing of our work, the Global Network on Extremism and Technology, and while this is accessible to everyone, the insights coming out of it allow us to support, with micro-grants, academics and experts around the world who have their finger on the pulse of extremist trends. This could be anything from AI-generated content, an entire insight series on that, to 3D printing and some of the concerns about how it is assisting and aiding terrorism and violent extremism, to gaming and gaming-adjacent platforms and what those signals look like when it is, in fact, the modification of characters, or whether you have a policy that allows you to name a player Adolf Hitler or not. These are things that tech companies are asking about and looking for policy guidance on across these sectors. And so when it comes to knowledge exchange, the smallest little trends being shared can really have an amplifier effect for tech companies in understanding what harm and threat might look like on their platforms. Along with this, we really want to understand different parts of the world and how violent extremism and terrorism is manifesting. There are some very broad-stroke global trends, but when we look at how extremists and terrorists use coded language, this is very colloquially specific; when we see how memes and icons and imagery are used to evade detection, this is very local-context specific. So on top of the technology, which helps us get to scale and speed, we really do need the context that sits around what you might surface and see as a moderator. Even with a standard, agreed-upon entity like Islamic State, if I were to have you surface an image and it’s a guy in the back of a Toyota, it’s really hard to know if that is foreign terrorist fighter imagery or if that is literally just a man in the back of a Toyota. And the same goes for a lot of different forms of violent extremist trends. So alongside the technological solutions, we will still need that human input, and we will still need that cross-sector knowledge sharing. 
We’ve been very grateful, even in fundamentally advancing how we think about what terrorist content means and looks like, to have CTED and others at the table to consult with, ensuring that we always communicate what we’re aiming for and that we don’t overstep in counterterrorism efforts and abuse other human rights, including freedom of expression. We’ve also ensured that we are on the ground; not everything can be done over Zoom. Fortunately or unfortunately, we host workshops in different parts of the world, and we have made sure that we work with ground-based partners and governments in order to have nuanced dialogues, not just imparting the knowledge we have about trends online, but gaining valuable feedback on what these trends look like in specific regions. Earlier this year, we hosted workshops in Brazil for Latin America, as well as, most recently, in Sweden at the Nordic Democracy Forum. And next year we’ll be working with the IIJ in Malta to convene around sub-Saharan Africa. So if anyone wants to follow up and work with us, and see where we can come and bring a two-way knowledge exchange and make sure the lessons are learned on both sides, I’d really love to further that as we go. And lastly, each year we pick three to five questions that we know no one government or tech company can answer on their own around topics of counterterrorism, and we form working groups. People apply to join a working group, meet a few times in a year, and we fund the development of outputs that create best practices, evolve our own incident response, and evolve frameworks for understanding terrorist content. 
In the last couple of years, we’ve had working groups around our own hash-sharing taxonomy, as mentioned, but also things like red-teaming, looking at the harms of AI-generated content, and blue-teaming, looking at the positives: positive interventions and how this amazing new technology can help with intervention work, counter-narratives, redirection, and translation in language areas where a lot of moderators are blind. So there are risks and opportunities as we advance this conversation. With that, I would love to open it up to more questions. There are so many rabbit holes, technically, philosophically, existentially, when we think about how to advance countering terrorism and violent extremism, but it is only through these multi-stakeholder collaborative efforts that we can really get at the 360-degree threat and opportunity and decide where to take the next steps. And with that, I yield back to Jennifer and UNCTED. Thank you.
Jennifer Bramlette: Thank you very much, Erin. I really appreciate it. Every time I sit with you and with Adam, I learn something. We genuinely appreciate the time that you’ve taken to be here with us today. And I’d like to thank everybody who’s here in the room today as well. I know there are many other opportunities for things to do, and apparently at six o’clock everything closes, so unfortunately I will not be able to open the floor up for questions. But I think some of us will be willing to stand out in the hallway and chat to answer any questions you may have. In closing, I do wish to thank you for being here and choosing to spend the last hour of a very busy day with us. It was an honor to have all of the speakers here. And really, the final word is on partnerships: it is through our partnerships and collaborations, through leveraging our shared knowledge, our lessons learned, and our good practices, that we shall be able to proactively overcome these challenges. CTED will continue to pursue its work and assist the work of our partners as we move forward with the IGF in countering terrorism. Thank you very much for being here today.
Jennifer Bramlette
Speech speed
140 words per minute
Speech length
2409 words
Speech time
1028 seconds
CTED’s mandate to assess member states’ implementation of UN resolutions on counterterrorism
Explanation
CTED is mandated to conduct assessments of member states’ implementations of UN Security Council resolutions on counterterrorism. This work involves identifying good practices and gaps in implementation, and facilitating technical assistance.
Evidence
CTED identifies good practice and also gaps in implementation for which CTED works with partner organizations and states to facilitate technical assistance.
Major Discussion Point
Countering Terrorist Use of Information and Communication Technologies (ICT)
Member states’ varying technological capabilities and resources for counterterrorism
Explanation
Member states face different challenges in countering terrorist use of ICT due to varying technological capabilities and resources. Some states are technologically advanced, while others struggle with basic infrastructure.
Evidence
There are member states who are extremely technologically advanced, who have no trouble bringing new tech in and onboarding it, using virtual reality, alternate reality, or augmented reality systems to test strategies and to work through contingency plans for training in the event a terrorist attack does happen, whereas other member states have trouble getting electricity to their police stations.
Major Discussion Point
Challenges in Addressing Terrorist Use of ICT
Need for updated counter-terrorism laws and regulatory frameworks
Explanation
There is a need to update counter-terrorism laws and regulatory frameworks to address crimes committed in online spaces or through cyber means. Many states lack laws to deal with crimes committed through or by artificial intelligence.
Evidence
Most states don’t even have on their books laws to deal with crimes committed through or by artificial intelligence. We’ve even been asked by authorities, like, how can we arrest a chatbot? How can we prosecute an AI?
Major Discussion Point
Challenges in Addressing Terrorist Use of ICT
Potential of AI and quantum technologies to exacerbate online harms and real-world damages
Explanation
Developments in artificial intelligence and quantum technologies have the potential to increase risks for online harms and real-world damages. However, these technologies can also be valuable tools for preventing and countering terrorism when used in accordance with international law.
Evidence
Developments in artificial intelligence and quantum technologies have the potential to exacerbate the risks for online harms and real-world damages. Yet, these valuable technologies offer immense benefits to society, and when used in a manner consistent with international law, they can be most useful tools for preventing and countering terrorism.
Major Discussion Point
Emerging Technologies and Their Impact on Counterterrorism
CTED’s inclusive approach involving member states, organizations, private sector, civil society, and academia
Explanation
CTED follows an inclusive approach that brings together various stakeholders to develop holistic, effective, and technologically advanced counterterrorism regimes. This multi-stakeholder approach is essential in the digital environment.
Evidence
CTED follows an inclusive approach that brings together member states, international, sub-regional, and regional organizations, the private sector, civil society, and academia. This is an essential component of a multi-stakeholder digital environment.
Major Discussion Point
Importance of Multi-stakeholder Collaboration
Shortage of tech talent and cutting-edge equipment in government entities
Explanation
One of the biggest capacity gaps noted is a shortage of tech talent and cutting-edge equipment in government entities. This presents challenges in attracting and retaining tech talent in government positions.
Evidence
One of the biggest capacity gaps we note from our dialogue with member states is a shortage of tech talent and cutting-edge equipment in government entities. Issues of how to build that tech talent and then attract it into government positions and then retain it when the private sector and other avenues offer greater financial rewards are pressing questions, and there are no simple or inexpensive solutions.
Major Discussion Point
Challenges in Addressing Terrorist Use of ICT
Pedro Roque
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 seconds
Parliamentary Assembly of the Mediterranean’s efforts to regulate AI and emerging technologies
Explanation
The Parliamentary Assembly of the Mediterranean (PAM) is committed to fostering dialogue, cooperation, and joint initiatives towards the regulation of AI and emerging technologies. PAM supports the efforts of the UN and the international community in this regard.
Evidence
PAM members are fully committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, thus supporting the efforts of the United Nations and the international community in this regard.
Major Discussion Point
Countering Terrorist Use of Information and Communication Technologies (ICT)
Need for scientific understanding and impact assessments of AI and emerging technologies
Explanation
PAM parliaments are committed to promoting a scientific understanding of AI and emerging technologies through evidence-based impact assessments. This includes evaluating immediate and long-term risks and opportunities of these technologies.
Evidence
This includes promoting a scientific understanding of AI and emerging technologies through evidence-based impact assessments, as well as evaluating their immediate and long-term risks and opportunities.
Major Discussion Point
Emerging Technologies and Their Impact on Counterterrorism
Jurisdictional complexities in cyberspace and cross-border consensus building
Explanation
There are jurisdictional complexities in cyberspace, such as content that may be illegal in one country but not in neighboring countries. Many states are working together to build cross-border consensus and implement multilateral legal and operational frameworks to address these challenges.
Evidence
For example, gray area content could be illegal in one country, but not in the countries bordering it. And so, like the examples outlined by PAM, many states are working together to build a cross-border consensus and to implement multilateral legal and operational frameworks to deal with these and many other ICT-related challenges.
Major Discussion Point
Challenges in Addressing Terrorist Use of ICT
PAM’s collaboration with UN and other stakeholders to shape a safer digital world
Explanation
PAM is committed to collaborating with the United Nations, the Internet Governance Forum, Member States, and all stakeholders to shape a safer and more equitable digital world. This includes ongoing work on reports and initiatives related to AI and new technologies.
Evidence
PAM will continue to collaborate with the United Nations, the Internet Governance Forum, its Member States and all stakeholders to shape a safer and more equitable digital world.
Major Discussion Point
Importance of Multi-stakeholder Collaboration
Arianna Lepore
Speech speed
141 words per minute
Speech length
1266 words
Speech time
537 seconds
UNODC’s Global Initiative on Handling Electronic Evidence to support criminal justice practitioners
Explanation
UNODC launched the Global Initiative on Handling Electronic Evidence to support law enforcement, prosecutors, judges, and other authorities in handling electronic evidence for criminal cases. The initiative includes the development of tools and guides to assist practitioners in this area.
Evidence
The Global Initiative on Handling Electronic Evidence was launched seven years ago. The purpose was exactly that. First of all, to foster public-private partnership, and it was thanks also to the efforts of CTED and our efforts to work closely with the private sector that the initiative… is a fully-fledged project that has a holistic approach, so it involves the private sector, the experts, the practitioners, the academia, and we developed different streams of work.
Major Discussion Point
Countering Terrorist Use of Information and Communication Technologies (ICT)
UNODC’s partnership with CTED and other organizations in developing tools and guides
Explanation
UNODC collaborates closely with CTED and other organizations in developing tools and guides for handling electronic evidence. This partnership ensures that the work of UNODC is informed by assessments and mandates from other UN bodies.
Evidence
The work of UNODC blends naturally with the work of CTED in the sense that normally the case is that our colleagues in CTED inform our work in the sense that thanks to their assessment and thanks to the mandate that UNODC has, which is to provide technical assistance to member states in the fight against terrorism, UNODC, and in particular its terrorism prevention branch, where I belong, put together programs, projects, in order to support and build capacity of criminal justice officials in fighting terrorism.
Major Discussion Point
Importance of Multi-stakeholder Collaboration
Adam Hadley
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 seconds
Tech Against Terrorism’s mission to disrupt terrorist use of the internet through public-private partnerships
Explanation
Tech Against Terrorism aims to save lives by disrupting terrorist use of the internet. The organization was established as a public-private partnership to bridge the divide between the private sector and the public sector in countering terrorist use of ICT.
Evidence
Well, our mission is to save lives by disrupting the terrorist use of the internet, and we’re proud to have been established by UN CTED way back in 2017 as a public-private partnership focused on bridging the divide between the private sector and the public sector.
Major Discussion Point
Countering Terrorist Use of Information and Communication Technologies (ICT)
Increasing entrepreneurial and imaginative use of technologies by terrorists
Explanation
Terrorists are becoming increasingly entrepreneurial and imaginative in their use of technologies. They are adapting their techniques to evade automated responses and are proving difficult to dislodge from major platforms.
Evidence
What we are finding is that terrorists are increasingly entrepreneurial and imaginative in how they use technologies. In many cases, they’re also going back onto the major platforms and are proving quite difficult to dislodge in a number of ways as they adapt their techniques.
Major Discussion Point
Challenges in Addressing Terrorist Use of ICT
Opportunities for generative AI to improve accuracy and volume of content moderation decisions
Explanation
While there are risks associated with AI and generative AI, there are also significant opportunities to improve the accuracy and volume of content moderation decisions. This could help detect terrorist content at scale more accurately while upholding fundamental freedoms and human rights.
Evidence
I remain hopeful that generative AI will provide capability to ensure more accurate content moderation decisions can be made and certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations.
Major Discussion Point
Emerging Technologies and Their Impact on Counterterrorism
Tech Against Terrorism’s work with governments and platforms to build capacity
Explanation
Tech Against Terrorism works with governments and platforms to build capacity in countering terrorist use of the internet. They provide various resources and services to support this effort, including threat intelligence, capacity building, and technology development.
Evidence
At Tech Against Terrorism, we have some technology, mainly the terrorist content analytics platform, the TCAP, which seeks to identify and verify terrorist content online. But we can’t do this on our own, which is why we commend the continued efforts of the GIFCT to share its resources, capabilities and know-how with the broader community.
Major Discussion Point
Importance of Multi-stakeholder Collaboration
Dr. Erin Saltman
Speech speed
170 words per minute
Speech length
2065 words
Speech time
725 seconds
GIFCT’s cross-platform tech solutions and incident response framework for countering terrorist content online
Explanation
The Global Internet Forum to Counter Terrorism (GIFCT) provides cross-platform tech solutions and an incident response framework to counter terrorist content online. This includes a hash-sharing database and collaborative efforts to stop the viral spread of terrorist content during incidents.
Evidence
Once you become a member of GIFCT, things around cross-platform tech solutions do include a scaled hash-sharing database where GIFCT and our member companies can ingest hashed content of terrorist and violent extremist material when it fits our criteria.
Major Discussion Point
Countering Terrorist Use of Information and Communication Technologies (ICT)
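The hash-sharing mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not GIFCT's actual implementation: the class and function names are hypothetical, and production systems rely on perceptual hashes (such as PDQ for images) so that near-duplicates also match, whereas the cryptographic SHA-256 digest used here only catches byte-identical copies. One design point does carry over: members share hashes, never the underlying content.

```python
import hashlib
from typing import Dict, Optional

def hash_content(data: bytes) -> str:
    """Digest a media item. Real hash-sharing databases use perceptual
    hashes (e.g. PDQ) so visually similar items match; SHA-256 is a
    simplification that matches exact copies only."""
    return hashlib.sha256(data).hexdigest()

class HashSharingDB:
    """Hypothetical shared database: members ingest hashes with a label,
    and any member can check an upload against the pooled hashes."""

    def __init__(self) -> None:
        self._hashes: Dict[str, str] = {}  # digest -> content label

    def ingest(self, data: bytes, label: str) -> str:
        digest = hash_content(data)
        self._hashes[digest] = label
        return digest

    def match(self, data: bytes) -> Optional[str]:
        return self._hashes.get(hash_content(data))

# One member platform flags an item; another platform screens an upload.
db = HashSharingDB()
db.ingest(b"flagged propaganda video bytes", "terrorist-content")
print(db.match(b"flagged propaganda video bytes"))  # prints terrorist-content
print(db.match(b"ordinary holiday photo bytes"))    # prints None
```

Because only digests are exchanged, a platform that never hosted the original item can still block re-uploads of it, which is the cross-platform benefit the speakers describe.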
Challenges and opportunities presented by AI-generated content in incident response efforts
Explanation
AI-generated content presents both challenges and opportunities in incident response efforts. There are concerns about fake incident response content, but also potential for AI to assist in verification processes and positive interventions.
Evidence
We’ve mentioned generative AI in the last few comments, and it’s also a concern of what might happen when you start getting fake incident response content in and around something that might or might not have even happened. How do we quickly verify and share information to stop the viral spread of misinformed or actually misleading incident content? This sort of verification process will be key to future incident response efforts.
Major Discussion Point
Emerging Technologies and Their Impact on Counterterrorism
GIFCT’s multi-stakeholder governance and programmatic efforts
Explanation
GIFCT emphasizes multi-stakeholder governance and programmatic efforts in countering terrorist use of the internet. This includes collaboration with governments, civil society, and tech companies to share knowledge and develop best practices.
Evidence
We’ve been very grateful, even in our own fundamental advancing of how we think of what terrorist content means and looks like, having CTED and others at the table to consult with and ensure we’re always communicating what we’re trying to aim for and how we don’t overstep in counterterrorism efforts to abuse other forms of human rights, including freedom of expression.
Major Discussion Point
Importance of Multi-stakeholder Collaboration
Agreements
Agreement Points
Importance of multi-stakeholder collaboration
Jennifer Bramlette
Pedro Roque
Arianna Lepore
Adam Hadley
Dr. Erin Saltman
CTED follows an inclusive approach that brings together member states, international, sub-regional, and regional organizations, the private sector, civil society, and academia. This is an essential component of a multi-stakeholder digital environment.
PAM will continue to collaborate with the United Nations, the Internet Governance Forum, its Member States and all stakeholders to shape a safer and more equitable digital world.
The work of UNODC blends naturally with the work of CTED in the sense that normally the case is that our colleagues in CTED inform our work in the sense that thanks to their assessment and thanks to the mandate that UNODC has, which is to provide technical assistance to member states in the fight against terrorism, UNODC, and in particular its terrorism prevention branch, where I belong, put together programs, projects, in order to support and build capacity of criminal justice officials in fighting terrorism.
At Tech Against Terrorism, we have some technology, mainly the terrorist content analytics platform, the TCAP, which seeks to identify and verify terrorist content online. But we can’t do this on our own, which is why we commend the continued efforts of the GIFCT to share its resources, capabilities and know-how with the broader community.
We’ve been very grateful, even in our own fundamental advancing of how we think of what terrorist content means and looks like, having CTED and others at the table to consult with and ensure we’re always communicating what we’re trying to aim for and how we don’t overstep in counterterrorism efforts to abuse other forms of human rights, including freedom of expression.
All speakers emphasized the critical importance of collaboration between various stakeholders, including governments, international organizations, private sector, civil society, and academia in addressing the challenges of terrorist use of ICT.
Challenges in addressing terrorist use of ICT
Jennifer Bramlette
Adam Hadley
There are member states who are extremely technologically advanced, who have no trouble bringing new tech in and onboarding it, using virtual reality, alternate reality, or augmented reality systems to test strategies and to work through contingency plans for training in the event a terrorist attack does happen, whereas other member states have trouble getting electricity to their police stations.
What we are finding is that terrorists are increasingly entrepreneurial and imaginative in how they use technologies. In many cases, they’re also going back onto the major platforms and are proving quite difficult to dislodge in a number of ways as they adapt their techniques.
Both speakers highlighted the challenges in addressing terrorist use of ICT, including the varying technological capabilities of different states and the adaptability of terrorist groups in using new technologies.
Similar Viewpoints
All three speakers acknowledge both the potential risks and benefits of AI and emerging technologies in countering terrorist use of ICT. They emphasize the need for responsible use and development of these technologies to maximize their benefits while mitigating potential harms.
Jennifer Bramlette
Adam Hadley
Dr. Erin Saltman
Developments in artificial intelligence and quantum technologies have the potential to exacerbate the risks for online harms and real-world damages. Yet, these valuable technologies offer immense benefits to society, and when used in a manner consistent with international law, they can be most useful tools for preventing and countering terrorism.
I remain hopeful that generative AI will provide capability to ensure more accurate content moderation decisions can be made and certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations.
We’ve mentioned generative AI in the last few comments, and it’s also a concern of what might happen when you start getting fake incident response content in and around something that might or might not have even happened. How do we quickly verify and share information to stop the viral spread of misinformed or actually misleading incident content? This sort of verification process will be key to future incident response efforts.
Unexpected Consensus
Need for updated legal frameworks
Jennifer Bramlette
Pedro Roque
Most states don’t even have on their books laws to deal with crimes committed through or by artificial intelligence. We’ve even been asked by authorities, like, how can we arrest a chatbot? How can we prosecute an AI?
PAM members are fully committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, thus supporting the efforts of the United Nations and the international community in this regard.
Despite coming from different perspectives (CTED and parliamentary assembly), both speakers strongly emphasized the urgent need for updated legal frameworks to address crimes committed through or by AI and emerging technologies. This unexpected consensus highlights the critical nature of this issue across different sectors.
Overall Assessment
Summary
The main areas of agreement among the speakers include the importance of multi-stakeholder collaboration, the challenges in addressing terrorist use of ICT, the dual nature of emerging technologies as both potential risks and tools for counterterrorism, and the need for updated legal frameworks.
Consensus level
There is a high level of consensus among the speakers on the core issues discussed. This strong agreement implies a shared understanding of the complex challenges in countering terrorist use of ICT and the need for collaborative, multi-faceted approaches. The consensus also suggests that future efforts in this area are likely to focus on strengthening partnerships, developing adaptive strategies to keep pace with technological advancements, and updating legal and regulatory frameworks to address emerging challenges.
Differences
Different Viewpoints
Approach to regulating AI and emerging technologies
Pedro Roque
Adam Hadley
PAM members are fully committed to fostering dialogue, cooperation and joint initiatives towards the regulation of AI and emerging technologies, thus supporting the efforts of the United Nations and the international community in this regard.
I remain hopeful that generative AI will provide capability to ensure more accurate content moderation decisions can be made and certainly encourage improved investment in generative AI to detect obvious examples of content emanating from designated terrorist organisations.
While Pedro Roque emphasizes regulation of AI and emerging technologies, Adam Hadley focuses more on the potential benefits of AI for content moderation and detection of terrorist content.
Unexpected Differences
Overall Assessment
summary
The main areas of disagreement revolve around the specific approaches to regulating and utilizing AI and emerging technologies in counterterrorism efforts.
difference_level
The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of multi-stakeholder collaboration and the need to address the challenges posed by terrorist use of ICT. The differences mainly lie in the emphasis placed on various aspects of the issue, such as regulation, technological solutions, and human rights considerations. These differences do not significantly impede the overall goal of countering terrorist use of ICT, but rather highlight the complexity of the issue and the need for a comprehensive approach.
Partial Agreements
All speakers agree on the importance of multi-stakeholder collaboration in countering terrorist use of ICT, but they emphasize different aspects: CTED focuses on inclusive approach, Tech Against Terrorism highlights technological solutions, and GIFCT stresses the balance between counterterrorism efforts and human rights.
Jennifer Bramlette
Adam Hadley
Dr. Erin Saltman
CTED follows an inclusive approach that brings together member states, international, sub-regional, and regional organizations, the private sector, civil society, and academia. This is an essential component of a multi-stakeholder digital environment.
At Tech Against Terrorism, we have some technology, mainly the terrorist content analytics platform, the TCAP, which seeks to identify and verify terrorist content online. But we can’t do this on our own, which is why we commend the continued efforts of the GIFCT to share its resources, capabilities and know-how with the broader community.
We’ve been very grateful, even in our own fundamental advancing of how we think of what terrorist content means and looks like, having CTED and others at the table to consult with and ensure we’re always communicating what we’re trying to aim for and how we don’t overstep in counterterrorism efforts to abuse other forms of human rights, including freedom of expression.
Takeaways
Key Takeaways
Terrorist use of ICT and emerging technologies poses a growing threat that requires coordinated multi-stakeholder efforts to address
There is a need for updated laws, regulatory frameworks, and improved technological capabilities to counter terrorist use of ICT
Public-private partnerships and cross-sector collaboration are essential for effective counterterrorism efforts online
Emerging technologies like AI present both risks and opportunities for counterterrorism efforts
Balancing security measures with human rights and fundamental freedoms remains a key challenge
Resolutions and Action Items
CTED to develop non-binding guiding principles for member states on countering terrorist use of ICT
UNODC to expand its Practical Guide on Handling Electronic Evidence to include FinTech providers
Tech Against Terrorism to continue 24/7 capability to respond to major terrorist attacks
GIFCT to host regional workshops for knowledge exchange on local extremist trends
Unresolved Issues
How to effectively regulate terrorist-operated websites and domain names
Addressing jurisdictional complexities in cyberspace
Developing laws to deal with crimes committed through or by artificial intelligence
Balancing content moderation and free speech concerns
Verifying information during incident response in the age of AI-generated content
Suggested Compromises
Using both list-based and behavior-based approaches to identify terrorist content online
Balancing technological solutions with human input and context for content moderation
Considering both risks and opportunities of emerging technologies like AI in counterterrorism efforts
Thought Provoking Comments
Historically, the terrorist use of the internet has been seen as a tactical tool for recruitment and radicalization, but increasingly our concern is that the internet is becoming a strategic battleground for terrorists and hostile nation states, but mainly for terrorists.
speaker
Adam Hadley
reason
This comment introduces a paradigm shift in how we view terrorist use of the internet, framing it as a strategic rather than merely tactical tool. This perspective challenges existing assumptions and broadens the scope of the discussion.
impact
It set the tone for a more comprehensive examination of terrorist activities online, leading to discussions about infrastructure, strategic communications, and the need for a more holistic approach to countering terrorist use of the internet.
Can terrorists and should terrorists be allowed to run their own websites? Should ISIS or Al-Qaeda have the right to buy their own domain name? If not, what should we do about it?
speaker
Adam Hadley
reason
These questions highlight a critical gap in current internet governance and counterterrorism efforts. They force consideration of complex issues around freedom of speech, internet regulation, and the practical challenges of countering terrorist infrastructure online.
impact
This comment shifted the discussion towards the need for clearer international frameworks and jurisdictional agreements to address terrorist-operated websites, emphasizing a gap in current counterterrorism efforts.
We realized very quickly, and in consultation with human rights experts and civil society organizations, there is an Islamist-extremist bias in most lists in a post-9-11 framework, and we wanted to get at some of the neo-Nazi and white supremacy attacks that we know are taking place in different parts of the world.
speaker
Dr. Erin Saltman
reason
This insight reveals a critical bias in existing counterterrorism frameworks and demonstrates a commitment to a more comprehensive and equitable approach to identifying terrorist content.
impact
It led to a discussion about the evolution of GIFCT’s approach, including the development of behavior-based buckets for identifying terrorist content, showing how the field is adapting to address a wider range of extremist threats.
Even a standard agreed upon entity like Islamic State, if I were to have you surface an image and it’s a guy in the back of a Toyota, it’s really hard to know if that is foreign terrorist fighter imagery or if that is literally just a man in the back of a Toyota.
speaker
Dr. Erin Saltman
reason
This example vividly illustrates the complexities involved in content moderation and the limitations of purely technological solutions in identifying terrorist content.
impact
It underscored the need for human input and cross-sector knowledge sharing in counterterrorism efforts, leading to a discussion about the importance of local context and nuanced understanding in content moderation.
Overall Assessment
These key comments shaped the discussion by broadening the perspective on terrorist use of the internet from tactical to strategic, highlighting critical gaps in current approaches, addressing biases in existing frameworks, and emphasizing the complexities involved in identifying and moderating terrorist content. They collectively pushed the conversation towards more nuanced, comprehensive, and collaborative approaches to countering terrorist use of the internet, while also highlighting the ongoing challenges and the need for continued evolution in this field.
Follow-up Questions
How can we arrest a chatbot or prosecute an AI?
speaker
Jennifer Bramlette
explanation
This highlights the legal challenges in addressing crimes committed through or by artificial intelligence, which many states are unprepared for.
Are there any plans for UNODC or any other entity to build a model law for crimes committed through or by artificial intelligence?
speaker
Jennifer Bramlette
explanation
This suggests a need for international guidance on legislating AI-related crimes.
How can we address the jurisdictional complexities in cyberspace, particularly regarding content that may be illegal in one country but not in others?
speaker
Jennifer Bramlette
explanation
This highlights the need for international cooperation and standardization in addressing online terrorist content.
How can we improve data access for analyzing terrorist content on large platforms while respecting data privacy concerns?
speaker
Adam Hadley
explanation
This addresses the challenge of effectively monitoring large platforms for terrorist content while balancing privacy concerns.
Should terrorists and designated terrorist organizations be allowed to run their own websites or buy domain names? If not, what should be done about it?
speaker
Adam Hadley
explanation
This raises important questions about internet governance and the limits of online freedoms for designated terrorist groups.
How can we improve clarity about jurisdiction and standardization of responses regarding terrorist-operated websites?
speaker
Adam Hadley
explanation
This suggests a need for international cooperation in addressing terrorist use of internet infrastructure.
How can we quickly verify and share information to stop viral spread of misinformed or misleading incident content, particularly in the context of generative AI?
speaker
Dr. Erin Saltman
explanation
This addresses the challenge of combating misinformation during terrorist incidents, especially with the rise of AI-generated content.
How can we further develop and implement positive interventions using AI technology for counter-narratives, redirecting, and translation in areas where moderators are blind?
speaker
Dr. Erin Saltman
explanation
This explores the potential positive applications of AI in countering terrorism and violent extremism online.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online