Open Forum #45 Advancing Cyber Resilience of Critical Infrastructure


Session at a glance

Summary

This open forum discussion focused on advancing cyber resilience of critical infrastructure in an increasingly connected world where malicious actors frequently target essential services. The panel brought together diplomatic and technical experts to explore how different communities can collaborate more effectively to strengthen cybersecurity defenses.


Pavel Mraz from UNIDIR outlined the alarming threat landscape, noting that nearly 40% of state-sponsored cyber operations in 2024 targeted critical infrastructure, with ransomware attacks surging by 275% and global cybercrime losses exceeding $10 trillion. Timea Suto from the private sector emphasized that companies face diverse threat actors including state-sponsored groups, criminal organizations, and insider threats, but stressed that even well-funded private entities cannot combat these challenges alone without government support and public-private partnerships.


Floreta Faber shared Albania’s experience with major cyber attacks in 2022, highlighting key lessons about cybersecurity being a mindset issue requiring involvement from all organizational levels, not just technical teams. She described significant reforms including expanding their cybersecurity authority from 20 to 85 people and increasing critical infrastructure designations by 50%. Caroline Troein from ITU discussed capacity building efforts, particularly the importance of national CERTs and cyber exercises that simulate real-world attacks to foster cross-sectoral coordination and build trust between stakeholders.


Lars Erik Smevold provided the energy sector perspective, emphasizing that resilience requires understanding cyber-physical systems and conducting regular drills with operational staff. He stressed the importance of cross-border cooperation, particularly in interconnected electricity grids. The discussion concluded with calls for bridging diplomatic and technical communities through practical cooperation frameworks, shared exercises, and inclusive capacity building that translates international norms into real-world protection measures.


Key points

## Major Discussion Points:


– **Current Cyber Threat Landscape for Critical Infrastructure**: The discussion revealed alarming statistics, with nearly 40% of state-sponsored cyber operations in 2024 targeting critical infrastructure sectors like energy, healthcare, finance, water, and telecommunications. Ransomware attacks surged by 275%, and global cybercrime losses exceeded $10 trillion, making cybercrime equivalent to the world’s third-largest economy if measured by GDP.


– **Multi-Stakeholder Collaboration and Breaking Down Silos**: A central theme emphasized the critical need to bridge gaps between diplomatic and technical communities, strengthen public-private partnerships, and foster cross-sectoral cooperation. Panelists stressed that no single actor—whether government, private sector, or international organization—can secure critical infrastructure alone.


– **National Experiences and Lessons Learned**: Albania’s experience with major cyber attacks in 2022 provided concrete insights into building resilience, including the importance of expanding from purely technical approaches to comprehensive capacity building, increasing staff from 20 to 85 people, and implementing new legal frameworks based on the NIS2 directive.


– **Capacity Building and Practical Implementation**: The discussion highlighted the vital role of cyber drills, tabletop exercises, and training programs in building national resilience. These practical tools help translate international frameworks and norms into real-world protection while fostering trust and coordination between different stakeholders who may not have previously interacted.


– **Policy Frameworks and International Cooperation**: Panelists explored how UN frameworks for responsible state behavior in cyberspace can be operationalized through practical measures like point-of-contact directories, crisis communication protocols, and regional cooperation models, while emphasizing the need for “smarter policy, not more regulation.”


## Overall Purpose:


The discussion aimed to explore how to advance cyber resilience of critical infrastructure through enhanced cooperation between diplomatic and technical communities, sharing of best practices and lessons learned, and development of practical frameworks for protecting essential services that underpin modern society.


## Overall Tone:


The discussion maintained a professional yet urgent tone throughout, beginning with sobering statistics about the threat landscape but evolving into a more constructive and solution-oriented conversation. While acknowledging the serious challenges and complexities involved, panelists remained optimistic about progress being made and emphasized practical, collaborative approaches. The tone was notably inclusive and emphasized mutual learning, with speakers from different sectors and regions sharing experiences openly and building on each other’s insights.


Speakers

– **Marie Humeau**: Moderator of the session


– **Floreta Faber**: Deputy Director General, Envoy for Cyber Diplomacy, Director of International Project Coordination and Strategic Development of Cybersecurity at the National Cyber Security Authority of Albania


– **Lars Erik Smevold**: Security and Process Control Architect, R&D IT and ICS at Statkraft (energy sector)


– **Pavel Mraz**: Cybersecurity researcher at UNIDIR (UN Institute for Disarmament Research)


– **Caroline Troein**: Cybersecurity Division at the ITU (International Telecommunication Union)


– **Timea Suto**: Global Digital Policy Lead at the International Chamber of Commerce (private sector perspective on critical infrastructure protection)


– **Akhil Thomas**: Strategy and Operations Manager at the Global Forum on Cyber Expertise (session summarizer)


– **Participant**: Works for an IT company owned by the Church of Norway (identified as Eirik)


**Additional speakers:**


– **Gautam Kaila**: Chief Executive Officer of the Global Cyber Forum (mentioned in introduction but did not speak during the recorded portion)


Full session report

# Advancing Cyber Resilience of Critical Infrastructure: A Multi-Stakeholder Forum Discussion


## Executive Summary


This comprehensive open forum discussion brought together diplomatic and technical experts to address the urgent challenge of strengthening cyber resilience for critical infrastructure in an increasingly interconnected world. The session, moderated by Marie Humeau, featured perspectives from international organisations, national governments, the private sector, and technical specialists, all united by the recognition that malicious actors are increasingly targeting essential services that underpin modern society.


The discussion revealed a concerning threat landscape whilst highlighting promising avenues for enhanced cooperation. Through detailed presentations and interactive dialogue, participants explored how different communities can collaborate more effectively to strengthen cybersecurity defences, moving beyond traditional silos to create comprehensive protection frameworks for critical infrastructure.


## Current Threat Landscape and Scale of Challenge


Pavel Mraz from UNIDIR opened the discussion by presenting statistics that established the gravity of the current cybersecurity environment. He reported that nearly 40% of state-sponsored cyber operations in 2024 specifically targeted critical infrastructure sectors, including energy, healthcare, finance, water, and telecommunications. This targeting represents a significant shift in the threat landscape, with essential services becoming primary objectives rather than collateral targets.


Mraz highlighted the economic scale of cybercrime, noting that global losses from cybercrime disruptions exceeded US$10 trillion in 2024, with ransomware attacks surging by 275%. He emphasised the evolution of attack methodologies, particularly the rise of supply chain attacks that leverage a “target one, compromise many” principle, allowing threat actors to reach multiple downstream customers through a single successful breach.


The UNIDIR researcher also introduced the UN framework for responsible state behaviour in cyberspace, particularly norm F, which prohibits attacks on critical infrastructure. He stressed the importance of translating these frameworks into practical measures through national legislation and institutional coordination.


## Private Sector Challenges and Investment Needs


Timea Suto, representing the private sector perspective, outlined the diverse threat landscape confronting private entities, which includes state-nexus actors, organised cybercriminal ecosystems, and insider threats from employees or contractors. She detailed significant investments being made by private sector organisations, including implementation of zero-trust architectures, vulnerability management programmes, supply chain security assessments, and incident response plans.


However, Suto emphasised a critical limitation: “Even the best-funded private entities cannot deter state-sponsored actors or take down global criminal networks on their own.” This led to her call for a fundamental shift in policy approaches, advocating for “smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors.”


Suto stressed the importance of inclusive policymaking processes that give all stakeholders a meaningful voice in developing critical infrastructure protection frameworks. She argued that governments should take a more active role in disrupting threat actors whilst allowing private companies to focus on operational security and innovation.


## National Experience: Albania’s Response to Cyber Attacks


Floreta Faber, Albania’s Deputy Director General Envoy for Cyber Diplomacy, shared Albania’s experience with major cyber attacks in 2022 that targeted the country’s e-government services. Her most significant insight was reframing cybersecurity from a purely technical challenge to a comprehensive organisational issue.


“We understood that talking about cybersecurity is not talking about technology; it is talking about a mindset, it is about involving more people, from top management to the simplest employee inside every organisation, so that cybersecurity is something everyone focuses on,” Faber explained.


Following the attacks, Albania expanded its cybersecurity authority from 20 to 85 staff and increased its critical infrastructure designations by 50%. The country also implemented new legislation based on the NIS2 directive and established regular cyber drills to build understanding between stakeholders.


Faber described Albania’s long-term approach to building regional cooperation through youth engagement, establishing cyber camps for young people in the region. “We believe those are things which take time. And sometimes they prevent you from not talking to each other for different trust reasons, which are not only cybersecurity,” she noted, acknowledging that technical cooperation cannot be separated from broader geopolitical contexts.


## International Capacity Building Efforts


Caroline Troein from the International Telecommunication Union provided insights into global capacity building efforts, noting that “many of the issues that developing countries are facing are ones that developed countries are facing. Are you being agile? Do you have the right people in the right places? Are the stakeholders actually coordinating?”


Troein reported that ITU has received requests for cybersecurity support from 46 countries, covering CERT establishment, strategy development, and specialised training programmes. She emphasised the critical role of national Computer Emergency Response Teams (CERTs) as the first line of defence, noting they require legal mandates, operational structures, sustainable funding, and continuous training to be effective.


The ITU representative highlighted the importance of cyber exercises that simulate real-world attacks and test response mechanisms. She noted that whilst countries now have more cybersecurity measures than ever before, challenges persist in coordination and implementation, suggesting that the bottleneck is not necessarily in individual components but in how these elements work together as integrated systems.


## Energy Sector Operational Realities


Lars Erik Smevold, representing the energy sector as a Security and Process Control Architect, provided insights into the operational realities of protecting critical infrastructure. He defined resilience as the ability to “anticipate, prepare for, respond to, recover from, and learn from disruptions.”


Smevold emphasised the unique challenges of cyber-physical systems, where cybersecurity measures implemented on one system can affect other interconnected systems. This is particularly relevant in the energy sector, where cross-border electricity grid connections require coordinated responses between Nordic and European transmission system operators.


He stressed the importance of involving operational staff in cybersecurity preparations and noted that technical specialists need better understanding of different critical infrastructure sectors. Smevold also contributed to discussions on bridging technical and diplomatic communities, suggesting that these communities need informal arenas to meet and build understanding of each other’s work and resource needs.


## Building Trust and Communication Networks


A recurring theme throughout the discussion was the critical importance of pre-established relationships and communication channels. Mraz introduced a compelling metaphor: “You cannot exchange business cards in a hurricane when a real cyber crisis hits, and you need assistance from abroad… You need to have all these channels, the trust, and the network already in place to know where to reach out.”


Faber described practical approaches to building regional cooperation through informal communication channels, including regular information sharing platforms. She emphasised that trust-building requires long-term investment and can be developed through professional networks that persist beyond specific projects or initiatives.


The discussion revealed that effective cooperation requires both formal structures and informal mechanisms. Whilst official frameworks and protocols are necessary, the human relationships and mutual understanding that enable effective cooperation often develop through informal interactions and shared experiences.


## Information Sharing Challenges


A significant challenge addressed was sharing sensitive cybersecurity information across borders. A participant from an IT company asked: “How can we make arrangements for sharing sensitive technical data across borders without making it public, while still allowing technical people to defend their systems better?”


This question highlighted a fundamental tension in cybersecurity cooperation: the need to share threat intelligence to enable collective defence whilst maintaining operational security. Faber responded by describing Albania’s approach to building regional cooperation through informal communication channels, representing practical mechanisms that technical professionals can use to build relationships and share information.


The discussion emphasised that information sharing requires sustained engagement and trust-building through professional networks and alumni connections that persist over time.


## Bridging Technical and Diplomatic Communities


Multiple speakers recognised that effective cybersecurity requires both technical expertise and diplomatic coordination, yet these communities often operate separately. Faber described Albania’s approach of bringing experienced diplomats into technical organisations, creating important translation capabilities between communities.


Smevold reinforced this theme by suggesting informal meeting opportunities and cross-visits between technical facilities and diplomatic offices. The discussion revealed that bridging these communities requires both formal structures and informal mechanisms, with understanding built through direct exposure to each other’s working environments and challenges.


## Areas of Consensus and Practical Recommendations


Despite the complexity of the challenges discussed, participants demonstrated strong consensus on several key points. There was universal agreement that multi-stakeholder collaboration is essential, with no single actor capable of addressing cyber threats alone. All participants agreed on the importance of capacity building and training that goes beyond technical skills to include awareness at all organisational levels.


The discussion generated several concrete recommendations:


– Countries should designate points of contact for crisis communication and establish pre-crisis trust networks


– Technical and diplomatic communities need more informal meeting opportunities to build mutual understanding


– Development of secure channels for sharing sensitive threat information across borders between technical professionals


– Strengthening regional cooperation through platforms like CERT-to-CERT information sharing


– Investment in long-term trust-building initiatives, including youth engagement programmes


– Translation of UN cyber norms into practical national frameworks with clear legal mandates and operational structures


## Ongoing Challenges


Several significant challenges remain unresolved. The question of how to effectively share sensitive technical threat information across borders whilst maintaining security represents a fundamental operational challenge. Balancing regulatory requirements with operational flexibility for private sector critical infrastructure operators remains an area where different stakeholders advocate for different approaches based on their experiences.


The fragmentation of critical infrastructure definitions and frameworks across different countries creates coordination challenges that may require improved mapping and translation between different national approaches. Additionally, scaling cybersecurity capacity building to meet global needs represents a resource challenge that may require innovative approaches to knowledge transfer and peer-to-peer learning.


## Conclusion


This comprehensive discussion demonstrated both the complexity of protecting critical infrastructure in the digital age and the potential for enhanced cooperation across traditional boundaries. The participants’ emphasis on cybersecurity as fundamentally a human and organisational challenge, rather than merely a technical one, represents a mature understanding that has significant implications for policy and practice.


The discussion’s focus on practical cooperation mechanisms—from informal communication channels to structured exercises and cross-community engagement—offers concrete pathways for translating high-level commitments into operational improvements. The emphasis on trust-building as a long-term strategic investment provides a foundation for sustainable cybersecurity cooperation.


Whilst significant challenges remain, particularly around information sharing mechanisms and regulatory approaches, the level of consensus achieved on fundamental principles provides a strong foundation for continued progress. The participants’ recognition that no single actor can secure critical infrastructure alone, combined with their practical suggestions for enhanced cooperation, offers pathways for more resilient and collaborative approaches to protecting the essential services upon which modern society depends.


Session transcript

Marie Humeau: Thank you and welcome to our open forum. We want to discuss with you how to advance cyber resilience of critical infrastructure. In an ever more connected world, not only are people more connected, but so is the critical infrastructure we rely on. The resilience of critical infrastructure, which is increasingly a target of malicious actors, is key. Robust cyber resilience measures are therefore vital. In an environment where incidents could have spillover effects on international peace and security, there are risks of escalation. We need to look at overcoming the silos between diplomatic and technical communities, strengthening national and cross-border CERT-to-CERT cooperation, and fostering multi-stakeholder engagement. The idea for this discussion came from the observation that different communities have an important role to play, but that they need to be offered more opportunities to share expertise and knowledge, to get better informed, and to get a greater understanding of what each community is doing and how we can support one another in our work to build a resilient cyberspace. We will explore all of this with our distinguished panel today. My name is Marie Humeau and I will be your moderator today. I’m happy to introduce you to our cross-community panel. On my right is Floreta Faber. She is Deputy Director General, Envoy for Cyber Diplomacy, Director of International Project Coordination and Strategic Development of Cybersecurity at the National Cyber Security Authority of Albania. On my left is Lars Erik Smevold, Security and Process Control Architect, R&D IT and ICS at Statkraft. Online I also have three panelists, Mr. Pavel Mraz, who works as a cybersecurity researcher at UNIDIR, Caroline Troein, Cybersecurity Division at the ITU, and Timea Suto, Global Digital Policy Lead, and Mr. Gautam Kaila, the Chief Executive Officer of the Global Cyber Forum. To facilitate my work and our reporting, I will also ask Akhil Thomas, Strategy and Operations Manager at the Global Forum on Cyber Expertise, to summarize the discussion in a few words at the end of the session. We are also counting on your active participation, so please prepare some questions for the Q&A session. Because it’s a very rich issue, I’m going to stop talking now and put the questions to my panelists. I will start by looking at the threat landscape and national experiences of how to build efficient critical infrastructure protection. For this, I will start by asking Pavel Mraz online the first question. What does today’s global cyber threat landscape look like for critical infrastructure, and where are the biggest vulnerabilities emerging?


Pavel Mraz: Marie, thank you for the floor, and good morning to everyone in Oslo and good day to those connecting online. To your question, the UN Institute for Disarmament Research will have a research report coming out summarizing the main threats of 2024 in cyberspace. Let me give you a few highlights, specifically focusing on critical infrastructure. When it comes down to critical infrastructure, the cyber threat landscape in 2024 has grown increasingly complex. It became clear that critical infrastructure remains both an attractive target for financially motivated actors and a strategic target for some state-affiliated actors. In 2024, alarmingly, nearly 40% of all documented cyber operations by states focused on critical infrastructure, targeting sectors such as energy, healthcare, finance, water, and telecommunications. And of course, these sectors are foundational. As a result, we saw last year a surge in ransomware attacks of 275%, and global financial losses from cybercrime disruptions exceeded US$10 trillion. To put it in other words, if cybercrime were a country measured by GDP, it would be the world’s third-largest economy. We also see attacks on digital supply chains becoming more prominent. Leveraging the principle of “target one, compromise many”, malicious cyber actors now increasingly use supply chain attacks to target downstream customers, including critical infrastructure operators. Importantly, internet infrastructure, which includes satellites, undersea cables, and data centers, is also increasingly vulnerable and targeted by cyber attacks. These types of threats raise concerns about widespread interruption of critical digital services, particularly in times of heightened geopolitical tensions. Even the UN system itself and humanitarian operations are not exempt from cyber attacks. According to the UN’s latest reporting, over 50% of cyber threats targeting the UN in 2024 came from advanced persistent threat actors, which include states, and these attacks have disrupted critical aid operations and endangered vulnerable populations. Taken together, these trends show that for many organizations cyber attacks are becoming a question of when, not a question of if, and no sector or state can contain cyber risks alone. As infrastructure becomes more digital and interconnected, securing it will require multi-level, multi-stakeholder cooperation, but also resilience planning and preparing for when cyber attacks hit. Positively, UN member states have acknowledged these risks, with states calling for greater protection of critical infrastructure, particularly those that deliver essential services across borders. States have also called for reinforcing an international taboo against targeting these types of systems, and a number of states have indicated that they will be protecting these types of systems. That will require strong cross-sectoral and cross-border cooperation, and also practical tools, including adopting national frameworks, using cyber drills, and stepping up capacity building to translate shared global principles into real-world protection on the ground. I will talk about these in more detail later on, but I will leave it at that for now and hand back over to you, Marie.


Marie Humeau: Thank you very much, Pavel, and thank you for this very clear scene-setter. Now that we have looked at the threats and the scarier side of things, we will also look at resiliency and how to really strengthen our cyberspace. But Timea first. Maybe on your side, Timea, from a private sector lens: who are the main threat actors targeting critical infrastructure, and how is the industry adapting? But also, what is needed to strengthen the resilience of the private sector? So, Timea, over to you.


Ms. Timea Suto: Thanks very much, Marie, and I’d just like to preface that everything I say here today is written in much more detail in a report that ICC published at the IGF last year on the protection of critical infrastructure and their supply chains, and that’s available in English, Spanish, and Chinese, as well as Arabic. So, if you want to hear more about what I try to cram into my short interventions, please take a look at the report, and I’ll put the link in the chat later on. To answer your question, Marie, from a private sector perspective, the threat landscape facing critical infrastructure has never been more serious or diverse. We are seeing a broad range of actors, each with their distinct motivations and capabilities, that target the essential services that underpin our economies and societies. On one end of the spectrum, we have the state-nexus threat actors, often referred to as advanced persistent threats, or APTs. These actors are often supported by governments, military, or intelligence institutions, and they are typically well-funded, highly skilled, and capable of executing long-term complex operations. Their objectives vary from disrupting services and accessing sensitive information to advancing geopolitical interests or undermining public trust in institutions, and they can target both public and private sector entities. At the same time, the private sector must contend with increasingly organized cyber-criminal ecosystems. These criminal groups are often globally distributed and structured in ways that make them resilient to takedowns and prosecution, while ransomware-as-a-service has made it possible for even relatively unsophisticated attackers to cause major disruptions. Thirdly, insider threats are also a significant concern. These are individuals, whether malicious, ambitious, or simply negligent, who could be employees or third-party contractors to critical infrastructure services, and who often have privileged access and fewer security checks. Even a small mistake on their part, or intentional sabotage, can have big cascading real-world consequences. What makes all of these threats more dangerous is the interconnected nature of our infrastructure systems. A compromise in one sector, say electricity, can ripple into others like healthcare, telecommunications, or transportation. And these aren’t just IT risks; these are national and global security concerns. Cyberattacks on critical infrastructure can lead to service outages, physical destruction, or even endanger lives. And it’s not just about keeping these systems online; it’s about making sure that these attacks don’t compromise the confidentiality and integrity of data, which can lead to long-lasting consequences like identity theft or misinformation that can cause havoc long after the incident has been dealt with. So how is the private sector responding to this? It is actually stepping up, making significant investments in cybersecurity resilience. We are seeing growing adoption of zero-trust architectures, continuous patching and vulnerability management, strong data backups, and supply chain risk assessments. Companies are building robust incident response plans and embedding cybersecurity by design into their systems. So there’s a lot that the private sector does, but it is critical to be clear-eyed about the limits of what the private sector can actually do on its own.
Even the best-funded private entities cannot deter state-sponsored actors or take down global criminal networks on their own. Cybersecurity, especially in the context of critical infrastructure, is a shared responsibility between government and industry. So to strengthen resilience, I think there are four things that are critical. First, governments must play a more active role in disrupting threat actors, enforcing laws, and creating accountability in cyberspace. This includes strengthening national capabilities, supporting law enforcement collaboration across borders, and fully implementing the existing international norms and frameworks of responsible state behavior in cyberspace. Secondly, we need stronger and more operational public-private partnerships, not just during the crises themselves, but in the ongoing governance and design of security measures. This includes real-time threat intelligence sharing, joint exercises, collaborative development of standards and guidelines, and much more. Third, we need to invest in capacity building and resilience, especially in sectors or regions where cybersecurity maturity is still developing. And last but not least, we need to strike the right balance between regulatory obligations and the sustainability of security controls. Regulations should be clear, risk-based, and consistent across borders. At the same time, voluntary standards and flexible frameworks can allow companies to adapt quickly to emerging threats and invest in the most effective protections. So to conclude, protecting critical infrastructure requires continuous investment, cooperation, and innovation. The private sector is deeply committed to strengthening its defenses and ensuring business continuity, but without decisive government action and deep ongoing collaboration, we will not be able to keep pace with the evolving threat environment that Pavel was talking about earlier. Thanks, Marie.


Marie Humeau: Thank you, Timea. I think you point out the importance of what we have to do together, that no one can achieve anything on their own, and that the stakeholders really need to work together. So now I will go to Floreta. Unfortunately, Albania has suffered recent cyber attacks. So can you maybe share some lessons? Because that is also how this works: sharing best practices and lessons learned. And how to be more resilient? Can you also give us some idea of how, at that time, the diplomatic and technical communities collaborated during the response? So Floreta, the floor is yours.


Floreta Faber: Thank you very much. This is a great opportunity to be here on this very honored panel and speak about the case of Albania. Yes, it is true that in mid-2022 we had a big cyber attack on the e-gov services, and Albania is a government which today offers over 1,200 e-services to Albanian citizens. Over 95% of all our services to citizens are online, so hitting that system was really aiming to disrupt our work for the citizens and to disrupt their trust in the government. It was a long and very important process for us, because we were fighting corruption, we were bringing more efficiency to citizens, and we were really focused on doing our best. But then this was a kind of wake-up call for us, because we had focused so much on technological advancement in responding to cybersecurity. In 2022 we did have a law on cybersecurity according to the NIS 1 directive by then, we did have an authority on cybersecurity, and we thought we had it covered. We understood that talking about cybersecurity is not talking about technology; it is talking about a mindset, it is about involving more people, from top management to the simplest employee inside every organization, so that cybersecurity is something everyone focuses on. Investment needs to go into technology, but capacity building is also important, for training people, and so that people who are not technical have the right mindset and awareness that even one mistake by one person inside a big organization can allow a simple attack to become a big cybersecurity incident. So these were the main lessons of 2022. We made big changes in the country, really big legal reforms, making a new law on cybersecurity in 2024 according to the NIS2 directive. As we are talking about critical and important infrastructures: this week we are actually expecting the government to approve the new list of critical and important infrastructure, which we built according to new procedures and a new methodology following the NIS2 directive. And a big change has been not only working with all the critical infrastructures and their technical employees, but also going beyond that and looking at the procedures, looking at how people are trained, looking at every employee inside organizations, public or private sector, to really have a focus on why they need to be focused, and understanding that with cybersecurity and cyber attacks, it is not simply a password which needs to be more secure. It is people who need to look at every email and every message they get, to make sure that the links they are opening are safe and that they can continue their business or private life in a secure manner. There have been big changes inside the authority: we had about 20 people, now we are going to 85 people inside the authority. The list of critical infrastructures has increased by 50% with the new methodology. We work on a daily basis with all the critical and important infrastructures. The big state-sponsored cyber attack of 2022 was not one and alone; we are continuously under those attacks. The last one practically happened last week, a really severe attack on the Tirana municipality. And our technical teams show the big changes: in 2022, it was difficult to put together a good group of experts to work on the case.
But in cases like today, for over one and a half years now, a team goes from the authority on cybersecurity, working closely with the cybersecurity teams inside the organizations, trying first of all, which is what is important, to bring back the services, and then going back to do the reverse engineering: find out what happened and where the attack came from. And this is where the important part is: what do we do with the attribution, when we find out at the end where the attack came from, which, at least in the last cases, has happened? We had over 80 attempts last year, and 32 attacks which became incidents, and we dealt with all the cases successfully. But what we fear, as in every country, I believe, is this: if the attacks are severe, if the attacks hit more than one infrastructure, what are our capacities to respond, and then how do we work with the diplomatic community to deal with the cases? Now, I have been part of a number of UN open-ended working group sessions, which gave us a good understanding of how countries in the world actually act or react in case of big cyber attacks and incidents. There is a system where every country can do that. Maybe some countries need to be more active, but at least from the Albanian side, for over a year now, every Friday we send all the information that we can make public and share it with the other CERTs. And those are practices which we need to reinforce also with the diplomatic community. Different regions of the world have different experiences, like in Asia or the Baltic countries. So we all come with our own difficulties, sometimes in talking to each other at the political or diplomatic level. And then, of course, the technical side is very important. So first, we need to make everyone aware that all those groups need to communicate with each other during all the preparation time, in order to be able to protect ourselves, but also to know how to communicate when there is a cyber incident. First, because we want to be able to share what happened and protect other critical infrastructures in the same field or the same category. As we know, cyber attacks can sometimes go cross-border very easily. So it can happen to us, but it can happen, unfortunately, to every other country. So we need to be prepared and be very, very clear about how we communicate in cases of cyber attacks. So, through the UN or through the OSCE, in different regions of the world and in different types of groups, we have agreed on confidence-building measures, where protecting critical and important infrastructures is really one of the key pillars we always look at. So maybe I’ll stop here, and if you have more questions, I’ll come back.


Marie Humeau: Thank you very much, Floreta. I think you already pointed out some of the points we will come back to at a later stage on cooperation, frameworks, and the way ahead. But before we jump into this, I still have two speakers for the first part. You mentioned the need for political commitment and for clarity. You mentioned the growing number of critical infrastructures and the need to invest in tech and capacity building. So, talking about capacity building, I will now give the floor to Caroline, because the ITU does a lot of capacity building with national CERTs. Maybe you can explain to us how that works, how cross-sectoral cooperation works, and the importance of having simulation exercises, for example. And maybe you can also tell us a bit about the kind of requests that the ITU receives and how you actually address those requests to efficiently protect critical infrastructure. Caroline, the floor is yours.


Caroline Troein: Thank you, Marie. I’d like to start actually on a positive note, because we’ve heard a lot about the increasing challenges that countries are facing. But according to the ITU’s Global Cybersecurity Index, countries now actually have more cybersecurity measures in place than ever before. So that means that there are more laws, more technical capabilities, more strategies, more trainings, more cooperation. Great. The challenge, echoing what Marie and others have said, is that now countries really need to think about: how do I enhance my maturity, sharpen my responsiveness, adapt to the new challenges that, for example, AI brings, and maybe even prepare for things like what a quantum future would look like. As Marie mentioned, we work in part on national CERTs, and we really see them as foundational to cyber resilience, because they serve as that first line of defense against ICT threats targeting critical infrastructure in particular. Now, as countries evolve, they may develop a cybersecurity agency, but the core responsibility for incident response still sits with that CERT. Going to the point made earlier, cyber capacity building should not be just a technical thing. While CERTs are key and are that front line, they need to have a legal mandate, they need to have clear operational structures, they need to have sustainable funding. All of these form part of what makes a successful CERT. And they also need continuous training and the ability to adapt to what comes next. And that’s where things like cyber drills, which are the cyber exercises ITU does, can be a really vital tool, because they aim to simulate real-world attacks, test national response mechanisms, and then foster cross-sectoral coordination. Ideally they also help bridge the gap between that technical audience and non-technical communities, which is a big challenge in protecting critical infrastructure. I want to bring in an example here. I was recently in a country where we ran some exercises specifically focused on critical information infrastructure, so a subset there. For this, we had some trainings: what they should be aware of in terms of their national regulations, which were relatively new, and understanding what the roles of the different actors were and the different dependencies that existed. And it was interesting to see the shift in mentality that started to happen with many of the participants. Firstly, while it was a relatively small country, most of the stakeholders there had not interacted before, and had not interacted around these topics particularly. The mentality shifts then started to build trust, because they saw how they had connections to each other, how they could help each other, and how they could move from a tick-in-the-box exercise that the regulator might have been putting in place to thinking proactively about what they can build as methods and pathways for sharing information. Like Floreta was saying: how do you actually share that information in a timely way? What structures do we need in place? What are the vulnerabilities that may be uncomfortable to talk about? Only when you have trust can you actually begin to talk about those limitations. And, of course, these kinds of exercises can bring a bit of renewed energy, as everybody is then on the same page. They see an alignment to move forward. Now, this is just one of the types of interventions that we do.
We receive a lot of requests from member states; the latest count, I think, is 46 countries that have requested some sort of support from ITU in terms of cybersecurity. We work with them on establishing or enhancing a national CERT and on developing or updating national cybersecurity strategies. We do quite a few different tailored trainings, on everything from trying to bolster the number of women in cybersecurity to topics around child online protection and, of course, critical infrastructure. We also try to do a lot of train-the-trainer programs, because our ultimate goal is to build local capacity. We’re not that big of a UN agency, and our team is small within that. And I think one of the things that we very much recognize, and the reason I like working with a lot of the people in this room, is that there’s a mutual recognition that you have to work together, but you also have to make sure that the country itself that you’re helping is empowered to start on its own journey. They need to be owning the process going forward. It won’t be ITU doing the cybersecurity of a country; it will be the country doing it. And we then need to look at what we do and how we can make sure that we’re developing practices for the country that can build that trust between stakeholders, as trust is particularly vulnerable when there are political or economic challenges. And with this, I do want to take a side note to just say: this is not a developing versus developed country issue. Many of the issues that developing countries are facing are ones that developed countries are facing. Are you being agile? Do you have the right people in the right places? Are the stakeholders actually coordinating? And least developed countries, and I’d like to add small island developing states to this, have the extra added issue that they lack the human capacity, let alone the technical tools. So as countries are facing these competing priorities, exercises can be a useful way to help identify where the areas for prioritization lie, where they can work more effectively together, and where they should go next. Thanks.


Marie Humeau: You mentioned bridging the gap between the tech audience and the non-technical audience. So I’m going to move to my technical person on the panel. Lars, you are actually trying to bridge this gap too, between the tech people inside the company and the non-technical, operational people, which is really crucial. From your perspective in the energy sector, what does resilience look like in practice, and how is it evolving? And based on your experience, what concrete actions and processes help strengthen cyber resilience?


Lars Erik Smevold: Thank you for having me on this panel, Marie; I appreciate that a lot. What strikes me in these discussions we are sitting in here is that availability is definitely at the front of our minds, because we are running critical infrastructure like hydropower plants, solar, wind, batteries, grid stabilizers, everything that keeps electricity grids in different countries around the globe up and running. And for us, the resilience part means we need the ability to anticipate, prepare for, respond to, recover from, and learn from the disruptions that happen. And to make this happen, we need to have the people at the sharp end. They need to get a better understanding, together with operations and also the managers and the policy makers, of how we can operationalize this. The processes are very good, the policies are good, but we need to adapt and keep in mind that with security and cybersecurity, we are actually dealing with cyber-physical systems. And these cyber-physical systems need to be taken good care of. It is not as if we can put any type of security measure into any type of system, because that system will affect another type of system, with consequences and impacts you maybe don’t want to have. So you need to build a better understanding of what you are actually trying to achieve. So for us, the resiliency part is, on our side, also a lot of physical things: what kind of spare parts do we have stored in case of emergency? Wind and weather we are highly educated and trained to handle; we also work to handle a lot of the cybersecurity attacks and understanding. For our part, we have actually done drills over the last couple of years, directly at our power stations with the people out there, and they loved that we actually came down to them, talked to them, and let them help us understand what their day-by-day work and life is like, and also how a cyber attack would affect them and their families if one happens. One thing is the cyber attack in itself, but if that is combined with other types of physical attacks at the same time, how do we handle that? And how do we, together with the national security authorities and the regulators for our sector, work together to actually achieve our end goal: to keep up the availability of this critical infrastructure that we are working on? For our part, at the same time, we also need to adapt to the climate changes that we have already felt, and work closely together with the other authorities, both in Norway and in the Nordics, because the electricity grids, both in the Nordics and in Europe, are highly connected, and we need to build that understanding. Also from experience: back in 2015-2016, the Nordic TSOs, the transmission system operators that are responsible for the highways of the electricity grids in each and every country, actually did drills together to see what affected us, together with the national security authorities, the national regulators, and also the different CERT teams in these countries.
And what we actually achieved from that type of exercise was a better understanding of what knowledge is needed, not only for cybersecurity and IT; you also need a good understanding of each and every sector, from electricity to telecoms, water and sewage, and the other critical infrastructures that are in this mixture, to actually make the right decisions at the right time. So from my perspective, it is definitely: go together, collaborate, and then make the people at the sharp end able to do their work and get a better understanding. So I think that’s good for now.


Marie Humeau: Thank you very much, Lars. The time is flying fast, because we have a lot to say. And based on your point on the importance of talking and working together, cross-sectoral, cross-regional, between the authorities, at national level, at regional level, as you mentioned as well, Floreta, I would like to look at cooperation frameworks and the path ahead. For this, I will give you a bit less time, so we can also have some time for questions. But I will start with you, Timea, online. You mentioned the challenges of the private sector in protecting critical infrastructure. What support would you need from policymakers? And also, why do you think business should care about discussions that are happening at the international level, in international fora such as the UN? And please keep it short, so we can have time for questions from the audience. Thanks.


Ms. Timea Suto: Thanks, Marie. I’ll try to be brief. Really, for business, protecting critical infrastructure today is increasingly difficult, and not because of a lack of willingness, but because of the complexity and fragmentation that surrounds it. So we have challenges like many of the essential services we rely on today not having been originally conceived as critical, so not designed to operate with the resilience and security that we now require. At the same time, these infrastructures are highly interdependent, not just with each other, but with suppliers, contractors, and digital service providers who might not themselves be classified as critical. Then we have a huge issue with fragmentation: there is no shared global understanding of what constitutes critical infrastructure, with definitions and legal frameworks differing widely between countries, and in some cases missing altogether. And then there’s the question of the maturity of critical infrastructure operators, which varies enormously, from those companies that have the resources to invest in advanced security measures to those, especially SMEs, who lack the tools, funding, and expertise, but are just as critical in the supply chains. So how do we ensure security for essential services without overburdening the companies that we actually rely on to operate and innovate them? I won’t talk about what the private sector could do; please read the report that I posted in the chat, where we say a lot about that. But I will focus on the policymakers, as that’s what you asked about, Marie. And there, I have a very short answer: it’s not more regulation, but smarter policy. Focus less on control and more on creating the right incentives for cybersecurity investment. There’s also a need to rebalance responsibility between the private and public sectors. Governments must recognize that security for socially critical infrastructure is not solely a private burden, particularly when that infrastructure is necessary for public well-being, national security, and economic stability. Instead of defaulting to new regulatory obligations, we need public investment, fiscal support, and policy environments that enable this. So there’s one line that I’d like to leave you with today: if we want effective cybersecurity outcomes, we need inclusive policymaking processes. I hope I was brief enough, Marie.


Marie Humeau: Thank you. I think you pointed out all the complexity and challenges. I guess there are also some challenges within the technical community. So Lars, maybe you can tell us a bit more about how the technical community cooperates, at cross-sectoral level as well as at international level. But also, from your perspective, should the technical community engage more with the diplomats? You started pointing this out, but if you can dig into it a bit further, that would be great. And also, how can the industry better engage, or have an incentive to engage, in those multilateral processes where governments are sitting and discussing the protection of critical infrastructure?


Lars Erik Smevold: Yeah, from my perspective, and our perspective, it’s definitely important to collaborate more with the diplomats and diplomacy, to get a better common understanding of what’s actually needed, what type of resources are needed, how much time things take, and what it actually takes to do them. So, to have some arenas where we can actually meet and talk, not that formal in a way, I would say, because that makes it easier and more comfortable to speak out in a better way. Today I brought out my white shirt; I tried to adapt to Floreta. I think that is a start. And maybe sometime I will invite Floreta and others to join a trip, for our sake, to some of our plants, maybe some that are available, and talk to our specialists and technicians, because that will definitely help you and others to understand, and at the same time the other way around: what is going on in your work? What can we help you with on your way? Because, as was mentioned before, the arenas where we can actually meet and get a better actual understanding of what critical infrastructures are, are very, very important. Because sometimes the discussions are on such a high level that the people on the ground do not feel it: does that actually hit me, or not? So the right arenas: cross-sectoral with the diplomats, but also internally in the countries, cross-sectoral, and also over borders, because in the electricity community we have ENTSO-E in Europe, the interest group for TSOs, but we also have CIGRE, which is a global interest organization that also has cybersecurity on the agenda. And in these different arenas, maybe we from the cybersecurity technical perspective can sometimes go and talk more, and the same from the diplomacy and IT community, so that we get a better understanding of electricity, water, and other types of infrastructure.


Marie Humeau: Thank you very much, Lars. And thankfully I have a white and blue shirt, so I’m not out of place sitting in between the two of you. And I’m wearing sneakers, which you can’t see, so I’m not that formal either. I think one of the important things is exactly this: that one understands the other. But it’s not only for one side to come to the diplomatic arena; it’s also for the diplomats to concretely understand what your needs are and how you operate on a daily basis, and actually to create this environment of trust and to be down to earth. Pavel, I’m going to jump to you to look at how the UN framework can actually be made more practical in protecting critical infrastructure. How can we follow what Lars just said, be more practical and down to earth, and better understand each other to make sure that we create this trusted environment? Pavel, over to you.


Pavel Mraz: Thank you so much, Marie. The UN framework for responsible state behavior in cyberspace, which has been mentioned by Floreta and by Caroline, does provide a strong foundation for protecting critical infrastructure. At the core of this framework are the agreed voluntary cyber norms, something all states have committed to, notably norm F, which affirms that states should not conduct or support any ICT activity that intentionally damages critical infrastructure. The question is how this is practically implemented. Among the things currently being done at the UN and global level, countries are designating points of contact globally for crisis communication, in recognition that you cannot exchange business cards in a hurricane: when a real cyber crisis hits and you need assistance from abroad, whether from the private sector or from another member state if the malicious activity is emanating from outside your own territory, you need to have the channels, the trust, and the network already in place to know where to reach out. Of course, there is another challenge here, and that is when we do capacity building in developing countries, we often see this mindset of cybersecurity being an IT department problem or a national cybersecurity agency problem. And here is where tabletop exercises simulating real crises really come into focus: they bring in all the decision makers and demonstrate that when critical services are down, whether energy, water, or health care, the problem is far broader than one for a national cybersecurity agency. So that really helps bring people together, as Caroline said, and we have seen this on the ground. In order for the UN framework to have a real-world impact and not remain just on paper, it must be operationalized nationally through legislation, institutional coordination, but also sustained investment in cybersecurity, supported not only by the technical community but also by the political decision makers in a country. It must be inclusive, involving technical experts, civil society, and the private sector, in other words, all the stakeholders that have a role to play in protecting critical infrastructure. And of course it should be backed by practical capacity building. I will leave it at that, in the interest of time, and hand it back over to you.


Marie Humeau: Thank you. So, Floreta, I will give you the floor, and I would like to keep a few minutes for questions, if there are any, and for Akhil at the end to wrap up all the information that we have gathered. But you are the perfect link between the diplomatic and the technical communities: you are a diplomat sitting in a technical organization, you have been part of the UN discussions, and you are also part of the Women in Cyber Fellowship. Maybe you can give us, very quickly, your view on how to bridge those different communities and how to ensure that each community understands and engages with the other.


Floreta Faber: As has been said here, it is absolutely crucial that those communities talk to each other. As I mentioned, Albania has undertaken a number of reforms to bring out the best of what a country can do in the cyber ecosystem, in order to reach the best results. Unfortunately, it is mostly the countries that have suffered big attacks that have learned the lesson. But as we always say in cybersecurity, it’s like a football match: you can be the best team in the world, and you train so that, when there is a game, you don’t concede a goal. But sometimes, even if you are the best and have the best players, the other side still scores. It’s the same in cybersecurity: you prepare, you believe you have the best team protecting you, but sometimes there are circumstances in which an attack can hit you. That is why we all train and talk in peacetime, when there is no hurricane, in order to be responsive. That’s why those communities need to talk to each other, because a crisis can be internal to an organization, which can be big; it can spill out into society, but it can also become an international issue. And especially when it becomes an international issue, it is the diplomatic community who does the talking. Now, the UN is one of the best examples, along with the OSCE and other organizations, of bringing the diplomatic and technical communities together, and that is actually one way to talk to each other. There are fellowships, like the Women in Cyber Fellowship, which I have been part of, the UN-Singapore Fellowship, and numerous other fellowships supported by the UN, where you see those communities together for one or two weeks in the same room, where you obviously start to build that trust in talking to each other. The UN-based points-of-contact directory is another step in how countries talk to each other. But on a daily basis, as you said, it’s really important that we all speak with the critical and important infrastructures. We maybe have the luxury of being a small country: we are going to have just over 200 critical and important infrastructures, while in some countries there are a few thousand. But we all have to find a way, either through clusters or through sectors, for them to talk to each other, to talk with the national authority on cybersecurity, and to understand why not only local but also international connections are very important. We have put together a new strategy on cybersecurity, along with the sub-laws, which need to be passed in a matter of a week or two. In Albania, there are two main points where we focus: supporting the critical and important infrastructure, and awareness, including support for children being safe online, but awareness at every level of society, for underrepresented groups, SMEs, all groups who otherwise do not hear about cybersecurity. But one of the five pillars of the strategy is international cooperation. In some countries, international cooperation is important because we do not have the means, the opportunities, and the money to really invest in cybersecurity, and international support is very important in this case. But we also need international support because we need to be connected. It is a world where we need to speak freely to each other, and when it comes to cybersecurity, there is no border.
You know, an attack can have an effect in one country and then go on to other countries. It can be a European or a U.S. organization or company with branches in a number of countries, and one hit can affect several countries all at once. So that’s why it’s important. Another thing we have tried is exactly this: bringing an experienced diplomat inside a technical organization. For me, it was first about understanding what I would do in an organization like this, coming with at least two years of experience working on cyber diplomacy. Singapore is an example: they have a team with one leadership but two groups, one with the Ministry of Foreign Affairs, or Communications, as they call it, and one technical group, understanding that there should be a very strong link between the organizations. We have started doing this, and it works very well, because translation is very important with the internationals, with the diplomatic community. But everything the technical groups have done also has to be translated in the way you present it to your bosses, to the government, to the prime minister, to people who want to know what happened, because if you go too technical, it is a different language. The point is that people need to understand, in their own language, what is going on and how they should be prepared. So this link is very important, and I believe every country, one way or another, is trying to take steps in this direction.


Marie Humeau: Thank you, Floreta. So, Caroline, I will give you the floor for one minute, and then I will keep two minutes for a question from the audience here and two minutes for Akhil to wrap up. Floreta, you mentioned that international cooperation is key, so maybe Caroline, very briefly, you can share some cooperation models that have proven to be very effective, that could serve as a basis for best practices and provide some ideas for future discussion in the UN.


Caroline Troein: Thanks. For the sake of time, I won’t share stories from the tabletop exercise we did with UNIDIR and UNODA for the points-of-contact directory that Pavel mentioned. I’ll just summarize and say it often felt like the technical and diplomatic contacts were operating from completely different playbooks. So more coordination is definitely needed here, and we should note that coordination needs to happen at the national, regional, and global levels, because a lot of coordination efforts are concentrated on either the diplomatic or the technical level, and we need those cross-cutting aspects. To quickly mention a few models: the ASEAN CERT maturity framework, MISA, the OAS, which is a very successful model, and the OIC are all driving coordination.


Marie Humeau: Thank you very much, Caroline. I just want to check with the audience if there is a burning question. If not, I do have one, but in the interest of time... yes, please.


Participant: Hello, my name is Eirik; I work for the IT company owned by the Church of Norway. I’m interested in how we can share more sensitive data across borders, because when you are a technical person, you sometimes get technical information that you don’t necessarily want to make public, but you still want to share it with other technical people so that they can defend their systems better. How can we make arrangements for that?


Floreta Faber: This is part of building trust with the people you work with. In the Western Balkans, the technical communities try, in different ways and formats, to be in contact with each other, starting with WhatsApp groups, email groups, and the platforms we use to share information weekly. We are also trying another way, which we believe is a long-term investment: we have started a cyber camp for young people in the region, and we are building an alumni group of people who go on in cybersecurity. They first met when they were 20 or 21, and since they come together at the same cyber camps every year, they continue to meet in the alumni group, whether they attended the first year or the second. We held the alumni meeting online for the first time last year, and we are going to do it in person. We try to build trust from a young age, because we believe these are things that take time. Sometimes different trust issues, which are not only about cybersecurity, prevent people from talking to each other, and in order to overcome those, we are trying every practical way possible to really build the communities regionally, all together.


Marie Humeau: Okay, great question. I think we could now talk about this for 20 minutes, and I think Lars was willing to answer, but we are running short of time. So very briefly, Akhil, if you can wrap up the entire hour of discussion that we had. Thank you, and you have the last word.


Mr. Akhil Thomas: Thank you, Marie. Well, as you said, I get the last word, which is a slightly unfair advantage of going last: I get to sound smart by summarizing all the great points that were shared here. So let me try to do justice to that in just two minutes. Firstly, thank you to our panelists and participants, both on site and online. The key takeaways from today’s session underscore that collaboration is non-negotiable: whether it is bridging diplomatic-technical divides, strengthening CERT-to-CERT cooperation, or fostering public-private partnerships, silos are a luxury we cannot afford. We heard from Floreta that resilience is both a mindset and a systemic effort, rooted in governance, funding, and international collaboration. Lars highlighted the energy sector’s reliance on cross-border teamwork, where regular drills and shared awareness are vital. Timea reminded us that while the private sector is innovating with zero trust and threat intelligence, what’s needed now to reduce fragmentation is smarter policy, not necessarily more regulation. Caroline emphasized the ITU’s role in building CERT capacity through cyber drills and peer learning, stressing that resilience requires legal mandates and cross-cutting coordination at all levels, from national to global. And Pavel mapped the alarming scale of threats, from ransomware to space infrastructure, and the urgent need to turn UN norms into action through practical tools like crisis exercises, POCs, and inclusive capacity building. Three themes came through very clearly: preparation, through exercises, clear protocols, and strong leadership; inclusivity, making sure that governments, industry, and civil society all have a seat at the table; and shared responsibility, recognizing that threats cascade across borders and no single actor can secure critical infrastructure alone. As we conclude, I encourage everyone to carry forward today’s calls to action: concrete partnerships, actionable frameworks, and sustained dialogue. Thank you again for your insights, and wishing you all a meaningful and productive time at IGF. Over to you, Marie.


Marie Humeau: Thank you. I’m just closing now, as we are running out of time. It has been a very rich discussion. I would like to thank the panelists, and I’ll give the floor back to the next panel.


Pavel Mraz

Speech speed: 197 words per minute
Speech length: 1301 words
Speech time: 394 seconds

Nearly 40% of state cyber operations target critical infrastructure including energy, healthcare, finance, water, and telecommunications

Explanation

Pavel Mraz highlighted that critical infrastructure has become both an attractive target for financially motivated actors and a strategic target for state-affiliated actors. This represents a significant portion of documented cyber operations by states in 2024.


Evidence

UNIDIR research report summarizing main threats of 2024 in cyberspace shows nearly 40% of all documented cyber operations by states focused on critical infrastructure sectors


Major discussion point

Current Cyber Threat Landscape for Critical Infrastructure


Topics

Cybersecurity


Agreed with

– Ms. Timea Suto

Agreed on

Critical infrastructure faces increasingly complex and diverse threats


Ransomware attacks surged by 275% with global financial losses exceeding $10 trillion, making cybercrime equivalent to the world’s third largest economy

Explanation

Pavel Mraz presented alarming statistics showing a massive surge in ransomware attacks and their economic impact. He used the comparison to national economies to illustrate the scale of cybercrime’s financial impact globally.


Evidence

275% surge in ransomware attacks in 2024; global financial losses from cybercrime disruptions exceeded US$10 trillion, making cybercrime equivalent to the world’s third-largest economy by GDP


Major discussion point

Current Cyber Threat Landscape for Critical Infrastructure


Topics

Cybersecurity | Economic


Supply chain attacks are becoming more prominent, leveraging “target one, compromise many” principle to reach downstream customers

Explanation

Pavel Mraz explained how malicious cyber actors are increasingly using supply chain attacks as an efficient method to target multiple victims. This approach allows attackers to compromise many organizations by targeting a single point in the supply chain.


Evidence

Attacks on digital supply chains leveraging the principle of ‘target one, compromise many’ to target downstream customers, including critical infrastructure operators


Major discussion point

Current Cyber Threat Landscape for Critical Infrastructure


Topics

Cybersecurity


UN framework provides foundation through voluntary cyber norms, particularly norm F prohibiting attacks on critical infrastructure

Explanation

Pavel Mraz outlined how the UN framework for responsible state behavior in cyberspace provides a strong foundation for protecting critical infrastructure. He specifically mentioned norm F which commits states not to conduct or support ICT activities that intentionally damage critical infrastructure.


Evidence

UN framework includes agreed voluntary cyber norms, notably norm F which affirms that states should not conduct or support any ICT activity that intentionally damages critical infrastructure


Major discussion point

International Cooperation Frameworks


Topics

Cybersecurity | Legal and regulatory


Countries are designating points of contact for crisis communication, recognizing need for pre-established trust and networks

Explanation

Pavel Mraz emphasized the importance of having communication channels and trust networks established before a crisis occurs. He noted that countries cannot exchange business cards during a cyber hurricane and need assistance channels ready in advance.


Evidence

Countries are designating points of contact globally for crisis communication, recognizing that you cannot exchange business cards in a hurricane when a real cyber crisis hits


Major discussion point

International Cooperation Frameworks


Topics

Cybersecurity


Agreed with

– Floreta Faber
– Caroline Troein

Agreed on

Trust-building is essential for effective information sharing and cooperation


Tabletop exercises help demonstrate that critical infrastructure attacks are broader problems than just IT department issues

Explanation

Pavel Mraz explained how tabletop exercises are effective in showing decision makers that when critical services like energy, water or healthcare are down, the problem extends far beyond what a national cybersecurity agency can handle alone. This helps bring different stakeholders together.


Evidence

Tabletop exercises simulating real crises help bring decision makers together by demonstrating that when critical services are down, the problem is far broader than a problem for a national cybersecurity agency


Major discussion point

International Cooperation Frameworks


Topics

Cybersecurity | Development


Agreed with

– Floreta Faber
– Caroline Troein

Agreed on

Capacity building and training are fundamental to cybersecurity resilience


International cooperation must be operationalized through legislation, institutional coordination, and sustained investment

Explanation

Pavel Mraz argued that for the UN framework to have real-world impact and not remain just on paper, it must be implemented practically at the national level. This requires comprehensive approaches involving multiple stakeholders and sustained commitment.


Evidence

UN framework must be operationalized nationally through legislation, institutional coordination, sustained investment in cybersecurity, and must be inclusive involving technical experts, civil society and private sector


Major discussion point

International Cooperation Frameworks


Topics

Cybersecurity | Legal and regulatory | Development


Agreed with

– Ms. Timea Suto
– Floreta Faber
– Caroline Troein
– Lars Erik Smevold

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity


Ms. Timea Suto

Speech speed: 157 words per minute
Speech length: 1166 words
Speech time: 444 seconds

Critical infrastructure faces threats from state-nexus actors, organized cybercriminal ecosystems, and insider threats from employees or contractors

Explanation

Timea Suto outlined the diverse threat landscape facing critical infrastructure, categorizing threats into three main types. She explained how each type has different motivations and capabilities, from well-funded government-supported APTs to organized criminal groups and internal threats from people with privileged access.


Evidence

State nexus threat actors (APTs) are well-funded and capable of long-term complex operations; cybercriminal ecosystems are globally distributed and resilient; insider threats include malicious or negligent employees and contractors with privileged access


Major discussion point

Current Cyber Threat Landscape for Critical Infrastructure


Topics

Cybersecurity


Agreed with

– Pavel Mraz

Agreed on

Critical infrastructure faces increasingly complex and diverse threats


Private sector is investing in zero-trust architectures, vulnerability management, supply chain assessments, and incident response plans

Explanation

Timea Suto described how the private sector is actively responding to cyber threats by making significant investments in cybersecurity resilience. She outlined various technical and procedural measures that companies are adopting to strengthen their defenses.


Evidence

Growing adoption of zero-trust architectures, continuous patching and vulnerability management, strong data backups, supply chain risk assessments, robust incident response plans, and embedding cybersecurity by design


Major discussion point

Private Sector Challenges and Needs


Topics

Cybersecurity | Economic


Even well-funded private entities cannot deter state-sponsored actors or dismantle global criminal networks alone

Explanation

Timea Suto emphasized the limitations of what private sector can achieve independently, regardless of their resources. She argued that cybersecurity for critical infrastructure is a shared responsibility that requires government involvement in addressing threats beyond private sector capabilities.


Evidence

Even the best-funded private entities cannot deter state-sponsored actors or take down global criminal networks on their own; cybersecurity is a shared responsibility between government and industry


Major discussion point

Private Sector Challenges and Needs


Topics

Cybersecurity


Agreed with

– Pavel Mraz
– Floreta Faber
– Caroline Troein
– Lars Erik Smevold

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity


Industry needs smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors

Explanation

Timea Suto advocated for a policy approach that emphasizes creating the right incentives for cybersecurity investment rather than imposing more regulatory burdens. She argued for rebalancing responsibilities, recognizing that security for socially critical infrastructure shouldn’t be solely a private burden.


Evidence

Focus less on control and more on creating right incentives for cybersecurity investment; governments must recognize that security for socially critical infrastructure is not solely a private burden; need public investment, fiscal support, and enabling policy environments


Major discussion point

Private Sector Challenges and Needs


Topics

Cybersecurity | Legal and regulatory | Economic


Disagreed with

– Floreta Faber

Disagreed on

Regulatory approach to private sector cybersecurity


Critical infrastructure protection requires inclusive policymaking processes involving all stakeholders

Explanation

Timea Suto concluded with the principle that effective cybersecurity outcomes require inclusive policymaking processes. She emphasized that all relevant stakeholders must be involved in developing policies for protecting critical infrastructure.


Evidence

If we want effective cybersecurity outcomes, we need inclusive policymaking processes


Major discussion point

Private Sector Challenges and Needs


Topics

Cybersecurity | Legal and regulatory


Floreta Faber

Speech speed: 147 words per minute
Speech length: 2226 words
Speech time: 904 seconds

Albania’s 2022 cyber attack on e-government services revealed that cybersecurity is about mindset and involving all people, not just technology

Explanation

Floreta Faber shared Albania’s experience with a major cyber attack that targeted their extensive e-government services. She explained how this incident served as a wake-up call, revealing that cybersecurity success depends on changing organizational mindset and involving everyone, not just focusing on technological solutions.


Evidence

Albania had over 1,200 e-services with 95% of citizen services online when attacked in mid-2022; the attack was a wake-up call showing cybersecurity is about mindset and involving people from top management to simple employees


Major discussion point

National Experiences and Lessons Learned


Topics

Cybersecurity | Development


Cybersecurity requires investment in both technology and capacity building, with awareness training for all employees from top management to simple workers

Explanation

Floreta Faber emphasized that effective cybersecurity requires a dual approach combining technological investments with comprehensive human capacity building. She stressed that everyone in an organization needs proper training and awareness, as one mistake by any person can allow a simple attack to become a major incident.


Evidence

Investment needed in technology but capacity building is also important for training people; people who are not technical need the right mindset and awareness that even one mistake by one person can allow a simple attack to become a big incident


Major discussion point

National Experiences and Lessons Learned


Topics

Cybersecurity | Development


Agreed with

– Pavel Mraz
– Caroline Troein

Agreed on

Capacity building and training are fundamental to cybersecurity resilience


Albania increased cybersecurity authority staff from 20 to 85 people and expanded critical infrastructure list by 50% following attacks

Explanation

Floreta Faber detailed the concrete organizational and regulatory changes Albania made in response to cyber attacks. These changes included significant expansion of human resources and updating their approach to identifying critical infrastructure according to new EU directives.


Evidence

Authority staff increased from about 20 people to 85 people; new law on cybersecurity in 2024 according to NIS2 directive; critical infrastructure list increased by 50% with new methodology


Major discussion point

National Experiences and Lessons Learned


Topics

Cybersecurity | Legal and regulatory | Development


Disagreed with

– Ms. Timea Suto

Disagreed on

Regulatory approach to private sector cybersecurity


Regular cyber drills help build understanding between stakeholders and create trust for sharing sensitive information

Explanation

Floreta Faber explained how Albania conducts regular exercises and drills to improve coordination between different stakeholders. She emphasized that these activities help build the trust necessary for effective information sharing and collaborative response to cyber incidents.


Evidence

Albania has had over 80 attempts and 32 attacks that became incidents last year, dealing with all cases successfully; regular exercises help build trust between technical teams and different organizations


Major discussion point

National Experiences and Lessons Learned


Topics

Cybersecurity | Development


Bringing experienced diplomats into technical organizations creates important translation between communities

Explanation

Floreta Faber shared her personal experience as a diplomat working within a technical cybersecurity organization. She explained how this arrangement helps bridge the gap between diplomatic and technical communities by providing necessary translation and communication between different audiences.


Evidence

Singapore has a team with one leadership but two groups, one with Ministry of Foreign Affairs and one technical group; bringing experienced diplomat inside technical organization helps translate between communities and present technical work to government leaders


Major discussion point

Bridging Technical and Diplomatic Communities


Topics

Cybersecurity


Agreed with

– Pavel Mraz
– Ms. Timea Suto
– Caroline Troein
– Lars Erik Smevold

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity


Building trust requires long-term investment including regional cooperation and youth engagement through cyber camps

Explanation

Floreta Faber described Albania’s approach to building long-term trust and cooperation in the Western Balkans region. She explained their strategy of investing in youth through cyber camps to create lasting professional relationships and trust networks that will benefit future cybersecurity cooperation.


Evidence

Western Balkans technical communities stay in contact through WhatsApp groups, emails, and platforms for weekly information sharing; cyber camp for young people creates alumni groups who meet annually to build trust from young age


Major discussion point

Bridging Technical and Diplomatic Communities


Topics

Cybersecurity | Development


Agreed with

– Pavel Mraz
– Caroline Troein

Agreed on

Trust-building is essential for effective information sharing and cooperation


Regional cooperation can start with informal communication channels like WhatsApp groups and email platforms for weekly information sharing

Explanation

Floreta Faber provided practical examples of how technical communities can begin sharing sensitive threat information across borders. She described informal but effective communication methods that help build trust and enable regular information exchange between cybersecurity professionals.


Evidence

Western Balkans technical communities use WhatsApp groups, email groups, and platforms for sharing weekly information; Albania sends information every Friday that can be made public and shared with other CERTs


Major discussion point

Practical Information Sharing Challenges


Topics

Cybersecurity


Trust-building requires sustained engagement and can be developed through alumni networks of cybersecurity professionals

Explanation

Floreta Faber explained their long-term strategy for building professional trust networks through sustained engagement programs. She described how creating alumni networks of cybersecurity professionals who first meet at young ages can overcome political and trust barriers that might otherwise prevent cooperation.


Evidence

Cyber camp alumni groups where people first meet at age 20-21 and continue meeting annually; first alumni meeting was online, planning in-person meetings; trying to build trust from young age because trust-building takes time


Major discussion point

Practical Information Sharing Challenges


Topics

Cybersecurity | Development


Caroline Troein

Speech speed: 161 words per minute
Speech length: 1079 words
Speech time: 400 seconds

Countries now have more cybersecurity measures than ever before including laws, technical capabilities, strategies, and training programs

Explanation

Caroline Troein provided a positive perspective on global cybersecurity progress, citing ITU’s Global Cybersecurity Index findings. She noted that while challenges are increasing, countries are also implementing more comprehensive cybersecurity measures across multiple dimensions.


Evidence

According to ITU’s Global Cybersecurity Index, countries have more cybersecurity measures in place than ever before including more laws, technical capabilities, strategies, trainings, and cooperation


Major discussion point

Role of Capacity Building and Technical Cooperation


Topics

Cybersecurity | Development | Legal and regulatory


National CERTs serve as the first line of defense and need legal mandate, operational structures, sustainable funding, and continuous training

Explanation

Caroline Troein emphasized the foundational role of national CERTs in cyber resilience, explaining that they serve as the primary defense against ICT threats targeting critical infrastructure. She outlined the essential requirements for effective CERT operations beyond just technical capabilities.


Evidence

National CERTs are foundational to cyber resilience as first line of defense; they need legal mandate, clear operational structures, sustainable funding, and continuous training; core incident response responsibilities remain with CERTs even as countries develop cybersecurity agencies


Major discussion point

Role of Capacity Building and Technical Cooperation


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Pavel Mraz
– Floreta Faber

Agreed on

Capacity building and training are fundamental to cybersecurity resilience


Cyber exercises simulate real-world attacks, test response mechanisms, and foster cross-sectoral coordination while bridging technical and non-technical communities

Explanation

Caroline Troein explained the multiple benefits of cyber exercises as capacity building tools. She emphasized how these exercises serve not only to test technical responses but also to build understanding and coordination between different stakeholder communities.


Evidence

Cyber drills simulate real world attacks, test national response mechanisms, foster cross-sectoral coordination, and help bridge the gap between technical and non-technical communities; exercises help build trust and show stakeholders their connections and dependencies


Major discussion point

Role of Capacity Building and Technical Cooperation


Topics

Cybersecurity | Development


Agreed with

– Pavel Mraz
– Floreta Faber

Agreed on

Trust-building is essential for effective information sharing and cooperation


ITU receives requests from 46 countries for cybersecurity support including CERT establishment, strategy development, and specialized training

Explanation

Caroline Troein provided concrete evidence of global demand for cybersecurity capacity building by citing the number of countries requesting ITU support. She outlined the diverse types of assistance requested, from institutional development to specialized training programs.


Evidence

46 countries have requested support from ITU in cybersecurity; support includes establishing or enhancing national CERTs, developing or updating national cybersecurity strategies, tailored trainings on various topics, and train-the-trainer programs


Major discussion point

Role of Capacity Building and Technical Cooperation


Topics

Cybersecurity | Development


Coordination needs to happen at national, regional, and global levels with cross-cutting aspects between diplomatic and technical levels

Explanation

Caroline Troein emphasized the multi-level nature of coordination required for effective cybersecurity. She noted that coordination efforts are often concentrated on either diplomatic or technical levels, but what’s needed are approaches that cut across both dimensions at all levels.


Evidence

Coordination needs to happen at national, regional and global levels; coordination efforts are often concentrated on either diplomatic or technical levels and we need cross-cutting aspects; mentioned ASEAN, MISA, OAS, OIC as successful coordination models


Major discussion point

Practical Information Sharing Challenges


Topics

Cybersecurity


Agreed with

– Pavel Mraz
– Ms. Timea Suto
– Floreta Faber
– Lars Erik Smevold

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity


Lars Erik Smevold

Speech speed: 134 words per minute
Speech length: 981 words
Speech time: 437 seconds

Energy sector resilience requires ability to anticipate, prepare for, respond to, recover from, and learn from disruptions

Explanation

Lars Erik Smevold defined resilience in the energy sector as a comprehensive capability that goes beyond just prevention to include full cycle management of disruptions. He emphasized that this requires involving people at all levels from operations to management and policy makers.


Evidence

Running critical infrastructure like hydropower plants, solar, wind, batteries, and grid stabilizers; resilience requires involving people at the sharp end, operations, managers, and policy makers to operationalize security measures


Major discussion point

Operational Resilience in Critical Sectors


Topics

Cybersecurity | Infrastructure


Cybersecurity must be adapted to cyber-physical systems where security measures on one system can affect other interconnected systems

Explanation

Lars Erik Smevold explained the complexity of securing cyber-physical systems in critical infrastructure, where traditional security measures may not be appropriate. He emphasized the need to understand system interdependencies and potential unintended consequences of security implementations.


Evidence

Security and cybersecurity are adapting into cyber physical systems; cannot put any type of security measures into any type of system because that system will affect another type of system with consequences you may not want


Major discussion point

Operational Resilience in Critical Sectors


Topics

Cybersecurity | Infrastructure


Cross-border electricity grid connections require coordinated response between Nordic and European transmission system operators

Explanation

Lars Erik Smevold highlighted the interconnected nature of electricity grids across borders and the need for coordinated cybersecurity responses. He provided specific examples of successful cooperation between Nordic transmission system operators and the importance of understanding climate change impacts.


Evidence

Electricity grids in Nordics and Europe are highly connected; Nordic TSOs did drills in 2015-2016 together with national security, regulators, and CERT teams; need to adapt to climate changes and work with other authorities


Major discussion point

Operational Resilience in Critical Sectors


Topics

Cybersecurity | Infrastructure


Technical specialists need better understanding of different critical infrastructure sectors including electricity, telecoms, and water systems

Explanation

Lars Erik Smevold emphasized the importance of cross-sectoral knowledge among technical specialists to make proper decisions during incidents. He argued that cybersecurity professionals need understanding beyond just IT and cybersecurity to include knowledge of various critical infrastructure sectors.


Evidence

Need good understanding not only of cybersecurity and IT, but also from electricity, telecoms, water and sewage, and other critical infrastructure in the mixture to make right decisions at the right time


Major discussion point

Operational Resilience in Critical Sectors


Topics

Cybersecurity | Infrastructure


Technical and diplomatic communities need informal arenas to meet and build understanding of each other’s work and resource needs

Explanation

Lars Erik Smevold advocated for creating informal meeting opportunities between technical and diplomatic communities to build mutual understanding. He suggested that less formal settings make it easier and more comfortable for both sides to communicate effectively.


Evidence

Important to collaborate more with diplomats to get common understanding of what’s needed, what resources are needed, and how much time things take; need arenas that are not too formal to make it easier and comfortable to speak


Major discussion point

Bridging Technical and Diplomatic Communities


Topics

Cybersecurity


Agreed with

– Pavel Mraz
– Ms. Timea Suto
– Floreta Faber
– Caroline Troein

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity


Cross-visits between technical facilities and diplomatic offices help build mutual understanding of operational realities

Explanation

Lars Erik Smevold suggested practical approaches for building understanding between communities, including site visits to technical facilities and diplomatic offices. He emphasized the importance of both sides understanding each other’s daily work and operational constraints.


Evidence

Suggested inviting diplomats to visit power plants and technical facilities to talk to specialists and technicians; also suggested technical people visit diplomatic offices to understand their work and how they can help each other


Major discussion point

Bridging Technical and Diplomatic Communities


Topics

Cybersecurity


Participant

Speech speed: 115 words per minute
Speech length: 71 words
Speech time: 36 seconds

Technical professionals need secure channels to share sensitive threat information across borders without making it public

Explanation

A participant from the Church of Norway’s IT company raised a practical question about sharing sensitive technical information across borders. They highlighted the challenge technical professionals face when they have threat information that could help others defend their systems but cannot be shared publicly.


Evidence

Works for IT company owned by Church of Norway; interested in sharing sensitive technical data across borders that technical people don’t want to go public but want to share with other technical people for defense


Major discussion point

Practical Information Sharing Challenges


Topics

Cybersecurity


Mr. Akhil Thomas

Speech speed: 166 words per minute
Speech length: 313 words
Speech time: 113 seconds

Marie Humeau

Speech speed: 156 words per minute
Speech length: 1686 words
Speech time: 646 seconds

Agreements

Agreement points

Multi-stakeholder collaboration is essential for cybersecurity

Speakers

– Pavel Mraz
– Ms. Timea Suto
– Floreta Faber
– Caroline Troein
– Lars Erik Smevold

Arguments

International cooperation must be operationalized through legislation, institutional coordination, and sustained investment


Even well-funded private entities cannot deter state-sponsored actors or dismantle global criminal networks alone


Bringing experienced diplomats into technical organizations creates important translation between communities


Coordination needs to happen at national, regional, and global levels with cross-cutting aspects between diplomatic and technical levels


Technical and diplomatic communities need informal arenas to meet and build understanding of each other’s work and resource needs


Summary

All speakers emphasized that cybersecurity, especially for critical infrastructure, requires collaboration across sectors, borders, and communities. No single actor can address cyber threats alone.


Topics

Cybersecurity | Development


Capacity building and training are fundamental to cybersecurity resilience

Speakers

– Pavel Mraz
– Floreta Faber
– Caroline Troein

Arguments

Tabletop exercises help demonstrate that critical infrastructure attacks are broader problems than just IT department issues


Cybersecurity requires investment in both technology and capacity building, with awareness training for all employees from top management to simple workers


National CERTs serve as the first line of defense and need legal mandate, operational structures, sustainable funding, and continuous training


Summary

Speakers agreed that effective cybersecurity requires comprehensive capacity building that goes beyond technical training to include awareness at all organizational levels and practical exercises.


Topics

Cybersecurity | Development


Trust-building is essential for effective information sharing and cooperation

Speakers

– Pavel Mraz
– Floreta Faber
– Caroline Troein

Arguments

Countries are designating points of contact for crisis communication, recognizing need for pre-established trust and networks


Building trust requires long-term investment including regional cooperation and youth engagement through cyber camps


Cyber exercises simulate real-world attacks, test response mechanisms, and foster cross-sectoral coordination while bridging technical and non-technical communities


Summary

Speakers emphasized that trust must be built before crises occur and requires sustained investment in relationships and communication channels.


Topics

Cybersecurity | Development


Critical infrastructure faces increasingly complex and diverse threats

Speakers

– Pavel Mraz
– Ms. Timea Suto

Arguments

Nearly 40% of state cyber operations target critical infrastructure including energy, healthcare, finance, water, and telecommunications


Critical infrastructure faces threats from state-nexus actors, organized cybercriminal ecosystems, and insider threats from employees or contractors


Summary

Both speakers highlighted the severity and diversity of threats targeting critical infrastructure, including state actors, criminals, and insider threats.


Topics

Cybersecurity


Similar viewpoints

Both emphasized the need for balanced approaches to cybersecurity governance that involve appropriate resource allocation and smart policy rather than just regulatory burden.

Speakers

– Ms. Timea Suto
– Floreta Faber

Arguments

Industry needs smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors


Albania increased cybersecurity authority staff from 20 to 85 people and expanded critical infrastructure list by 50% following attacks


Topics

Cybersecurity | Legal and regulatory | Economic


Both speakers emphasized the importance of regular exercises and cross-border coordination, drawing from their practical experience in managing critical infrastructure.

Speakers

– Lars Erik Smevold
– Floreta Faber

Arguments

Cross-border electricity grid connections require coordinated response between Nordic and European transmission system operators


Regular cyber drills help build understanding between stakeholders and create trust for sharing sensitive information


Topics

Cybersecurity | Infrastructure


Both speakers highlighted the global demand for cybersecurity capacity building and the effectiveness of practical exercises in building understanding across communities.

Speakers

– Caroline Troein
– Pavel Mraz

Arguments

ITU receives requests from 46 countries for cybersecurity support including CERT establishment, strategy development, and specialized training


Tabletop exercises help demonstrate that critical infrastructure attacks are broader problems than just IT department issues


Topics

Cybersecurity | Development


Unexpected consensus

Informal communication channels are as important as formal frameworks

Speakers

– Floreta Faber
– Lars Erik Smevold

Arguments

Regional cooperation can start with informal communication channels like WhatsApp groups and email platforms for weekly information sharing


Technical and diplomatic communities need informal arenas to meet and build understanding of each other’s work and resource needs


Explanation

It was unexpected to see both a diplomat and a technical expert emphasize the importance of informal communication channels like WhatsApp groups alongside formal diplomatic and technical frameworks. This suggests that practical, everyday communication tools are recognized as vital for cybersecurity cooperation.


Topics

Cybersecurity


Long-term youth engagement as a cybersecurity strategy

Speakers

– Floreta Faber

Arguments

Trust-building requires sustained engagement and can be developed through alumni networks of cybersecurity professionals


Explanation

The emphasis on building cybersecurity cooperation through youth engagement and alumni networks represents an unexpected long-term strategic approach that goes beyond traditional diplomatic or technical cooperation models.


Topics

Cybersecurity | Development


Overall assessment

Summary

The speakers demonstrated remarkable consensus on the need for multi-stakeholder collaboration, capacity building, trust-building, and the recognition that cyber threats to critical infrastructure are complex and require coordinated responses. There was strong agreement on the limitations of single-actor approaches and the importance of both formal and informal cooperation mechanisms.


Consensus level

High level of consensus with practical implications for cybersecurity policy. The agreement suggests that the cybersecurity community has matured in its understanding that technical solutions alone are insufficient, and that sustainable cybersecurity requires investment in human relationships, institutional cooperation, and long-term capacity building across all stakeholder groups.


Differences

Different viewpoints

Regulatory approach to private sector cybersecurity

Speakers

– Ms. Timea Suto
– Floreta Faber

Arguments

Industry needs smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors


Albania increased cybersecurity authority staff from 20 to 85 people and expanded critical infrastructure list by 50% following attacks


Summary

Timea advocates for less regulation and more incentives for private sector, emphasizing that security shouldn’t be solely a private burden. Floreta’s experience shows Albania’s response involved significant regulatory expansion and increased government oversight of critical infrastructure.


Topics

Cybersecurity | Legal and regulatory | Economic


Unexpected differences

Role of government regulation in critical infrastructure protection

Speakers

– Ms. Timea Suto
– Floreta Faber

Arguments

Industry needs smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors


Albania increased cybersecurity authority staff from 20 to 85 people and expanded critical infrastructure list by 50% following attacks


Explanation

This disagreement is unexpected because both speakers represent the need for stronger critical infrastructure protection, yet they have fundamentally different views on government’s role. Timea, from private sector perspective, argues against more regulation while Floreta’s practical experience led to significant regulatory expansion. This reveals a tension between private sector preferences and real-world government responses to cyber incidents.


Topics

Cybersecurity | Legal and regulatory | Economic


Overall assessment

Summary

The discussion showed remarkable consensus on the nature of threats and the need for cooperation, with limited but significant disagreement on regulatory approaches and implementation methods


Disagreement level

Low to moderate disagreement level. Most speakers agreed on fundamental challenges and goals, but differed on specific approaches to regulation and implementation. The main tension was between private sector preference for incentive-based policies versus government experience favoring regulatory expansion. This disagreement has significant implications as it reflects the ongoing global debate about how to balance private sector autonomy with government oversight in critical infrastructure protection.




Takeaways

Key takeaways

Cybersecurity for critical infrastructure requires a multi-stakeholder approach involving governments, private sector, technical communities, and diplomatic communities working together


Cyber resilience is fundamentally about mindset and people, not just technology – requiring awareness and training from top management to individual employees


The threat landscape is escalating rapidly with nearly 40% of state cyber operations targeting critical infrastructure and ransomware attacks surging 275%


No single actor can secure critical infrastructure alone – shared responsibility between public and private sectors is essential


Trust-building between different communities (technical, diplomatic, operational) is crucial and requires sustained long-term investment


Practical cooperation mechanisms like cyber drills, tabletop exercises, and informal communication channels are vital for building operational resilience


International frameworks like UN cyber norms must be operationalized through national legislation, institutional coordination, and practical capacity building


Cross-border coordination is essential given the interconnected nature of critical infrastructure, especially in sectors like energy and telecommunications


Resolutions and action items

Countries should designate points of contact for crisis communication and establish pre-crisis trust networks


Technical and diplomatic communities need more informal meeting opportunities to build mutual understanding


Implementation of cross-visits between technical facilities and diplomatic offices to understand operational realities


Development of secure channels for sharing sensitive threat information across borders between technical professionals


Strengthening of regional cooperation through platforms like CERT-to-CERT information sharing


Investment in long-term trust-building initiatives including youth engagement through cyber camps and alumni networks


Translation of UN cyber norms into practical national frameworks with clear legal mandates and operational structures


Unresolved issues

How to effectively share sensitive technical threat information across borders while maintaining security


Balancing regulatory requirements with operational flexibility for private sector critical infrastructure operators


Addressing the fragmentation of critical infrastructure definitions and frameworks across different countries


Scaling cybersecurity capacity building to meet the needs of 46+ countries requesting ITU support


Ensuring adequate funding and resources for expanding cybersecurity authorities and capabilities


Managing the complexity of interdependent critical infrastructure systems where security measures on one system can affect others


Bridging the maturity gap between well-resourced critical infrastructure operators and smaller companies in supply chains


Suggested compromises

Focus on ‘smarter policy’ with incentives for cybersecurity investment rather than additional regulatory burdens


Rebalance responsibility between private and public sectors, with governments taking more active role in disrupting threat actors while private sector focuses on operational security


Use flexible frameworks and voluntary standards that allow companies to adapt quickly to emerging threats while meeting regulatory requirements


Implement inclusive policymaking processes that give all stakeholders a seat at the table rather than top-down regulatory approaches


Combine formal diplomatic channels with informal technical cooperation mechanisms to bridge different community cultures and working styles


Thought provoking comments

We understood that talking about cyber security is not talking about technology; it's talking about a mindset, it's about involving more people, from the top management to the simple employee inside every organization, so that cyber security is something everyone needs to focus on.

Speaker

Floreta Faber


Reason

This comment fundamentally reframes cybersecurity from a technical problem to a human and organizational challenge. It challenges the common perception that cybersecurity is solely an IT department responsibility and emphasizes the critical role of human factors and organizational culture.


Impact

This insight shifted the discussion from technical solutions to human-centered approaches. It influenced subsequent speakers to emphasize training, awareness, and cross-community collaboration. Caroline later built on this by discussing the importance of bridging technical and non-technical audiences, and Lars emphasized the need to make people ‘in the sharp end’ understand their role.


If cybercrime was a country measured by GDP, it would be the world's third-largest economy.

Speaker

Pavel Mraz


Reason

This striking analogy puts the scale of cyber threats into perspective by comparing cybercrime’s economic impact to national economies. It transforms abstract statistics into a concrete, relatable comparison that emphasizes the magnitude of the challenge.


Impact

This comment established the gravity of the threat landscape early in the discussion, setting a serious tone that influenced all subsequent contributions. It provided context for why the collaborative approaches discussed later are not just beneficial but absolutely necessary given the scale of the challenge.


It’s not more regulation, but smarter policy. Focus less on control and more on creating the right incentives for cybersecurity investment.

Speaker

Timea Suto


Reason

This comment challenges the conventional regulatory approach to cybersecurity and proposes a paradigm shift from compliance-based to incentive-based policy frameworks. It addresses a fundamental tension between government oversight and private sector innovation.


Impact

This insight introduced a nuanced policy perspective that moved the discussion beyond simple public-private cooperation to examining the quality and nature of policy interventions. It influenced the later discussion about the need for ‘inclusive policymaking processes’ and shaped the conversation about sustainable approaches to critical infrastructure protection.


You cannot exchange business cards in a hurricane when a real cyber crisis hits, and you need assistance from abroad… You need to have all these channels, the trust, and the network already in place to know where to reach out.

Speaker

Pavel Mraz


Reason

This vivid metaphor illustrates the critical importance of pre-established relationships and communication channels in crisis management. It emphasizes that crisis response preparation must happen during peacetime, not during emergencies.


Impact

This comment reinforced the importance of proactive relationship-building and influenced the discussion toward practical cooperation mechanisms. It connected with Floreta’s later emphasis on building trust ‘from a young age’ through initiatives like cyber camps, and supported the overall theme of sustained, long-term collaboration rather than ad-hoc responses.


Many of the issues that developing countries are facing are ones that developed countries are facing. Are you being agile? Do you have the right people in the right places? Are the stakeholders actually coordinating?

Speaker

Caroline Troein


Reason

This comment challenges the traditional developed/developing country dichotomy in cybersecurity discussions and identifies universal challenges that transcend economic development levels. It reframes capacity building as a shared global challenge rather than a one-way transfer.


Impact

This insight shifted the conversation from a donor-recipient model to a more collaborative, peer-learning approach. It influenced the discussion toward recognizing that all countries face similar fundamental challenges in coordination, agility, and human resources, regardless of their development status.


We have started a cyber camp of young people in the region… we believe those are things which take time. And sometimes you are prevented from talking to each other for different trust reasons, which are not only cyber security.

Speaker

Floreta Faber


Reason

This comment introduces a long-term, generational approach to building trust and cooperation that acknowledges non-technical barriers to collaboration. It recognizes that geopolitical and historical tensions can impede technical cooperation and proposes a creative solution.


Impact

This insight added a temporal dimension to the discussion, emphasizing that effective cooperation requires sustained, long-term investment in relationships. It influenced the conversation toward recognizing that technical cooperation cannot be separated from broader political and social contexts, and that innovative approaches are needed to overcome these barriers.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional approaches and introducing more nuanced perspectives. Floreta’s reframing of cybersecurity as a mindset rather than just technology set the tone for a human-centered discussion throughout. Pavel’s economic comparison and crisis metaphor established both the scale of the challenge and the urgency of proactive cooperation. Timea’s call for ‘smarter policy’ introduced a sophisticated policy framework that moved beyond simple regulatory approaches. Caroline’s observation about universal challenges across development levels democratized the discussion and promoted peer learning. Finally, Floreta’s generational approach to trust-building added a long-term strategic dimension. Together, these comments elevated the discussion from technical problem-solving to strategic, human-centered, and politically-aware approaches to cybersecurity cooperation. They created a narrative arc that moved from threat assessment to collaborative solutions, emphasizing that effective cybersecurity requires sustained investment in relationships, innovative policy approaches, and recognition of the human factors that underpin all technical systems.


Follow-up questions

How can we make arrangements for sharing sensitive technical data across borders without making it public, while still allowing technical people to defend their systems better?

Speaker

Eirik (participant from IT company owned by the Church of Norway)


Explanation

This addresses a critical gap in international cybersecurity cooperation where technical experts have valuable threat intelligence but lack secure channels to share it across borders for collective defense


How do we handle attribution when we find out where cyber attacks came from, and what do we do with this information diplomatically?

Speaker

Floreta Faber


Explanation

This highlights the challenge of translating technical attribution findings into appropriate diplomatic responses and the need for clear protocols on how to act on attribution intelligence


How do our capacities hold up when attacks are severe and target multiple infrastructures simultaneously?

Speaker

Floreta Faber


Explanation

This addresses concerns about scalability of national cyber response capabilities during coordinated or large-scale attacks affecting multiple critical infrastructure sectors


How do we prepare for what a quantum future would look like in terms of cybersecurity?

Speaker

Caroline Troein


Explanation

This identifies the need for forward-looking research and preparation for quantum computing’s impact on current cybersecurity measures and critical infrastructure protection


How can we ensure security for essential services without overburdening the companies that we rely on to operate and innovate them?

Speaker

Timea Suto


Explanation

This addresses the balance between regulatory requirements for cybersecurity and maintaining business viability, particularly for smaller companies in critical supply chains


How do we handle cyber attacks combined with other types of physical attacks simultaneously?

Speaker

Lars Erik Smevold


Explanation

This highlights the need for research and planning around hybrid attacks that combine cyber and physical elements, which could overwhelm traditional response capabilities


How can the industry better engage or have incentives to engage in multilateral processes where governments discuss protection of critical infrastructure?

Speaker

Marie Humeau (moderator)


Explanation

This addresses the gap between private sector technical expertise and international policy discussions, seeking ways to improve industry participation in global governance processes


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Main Session 2: The governance of artificial intelligence

Session at a glance

Summary

This discussion focused on the governance of artificial intelligence, examining the current landscape of AI regulation and the challenges of creating inclusive, effective frameworks. The panel, moderated by Kathleen Ziemann from the German development agency GIZ and Guilherme Canela from UNESCO, brought together representatives from the private sector, government, civil society, and international organizations to discuss how different stakeholders can collaborate on AI governance.

The panelists acknowledged that the AI governance landscape has become increasingly complex, with numerous frameworks, principles, and regulatory initiatives emerging globally, including the OECD AI principles, UNESCO’s AI ethics recommendations, the EU AI Act, and various national strategies. Melinda Claybaugh from Meta emphasized that while there is no lack of governance frameworks, there remains disagreement about what constitutes AI risks and how they should be measured, suggesting the need for broader conversations about enabling innovation alongside managing risks. Mlindi Mashologu, representing the South African government, highlighted the importance of locally relevant AI governance that addresses context-specific challenges while maintaining human rights principles and ensuring AI systems are ethical, inclusive, and accountable.

Jhalak Kakkar from the Centre for Communication Governance stressed the importance of meaningful multi-stakeholder participation in AI governance processes and argued against creating a false dichotomy between innovation and regulation, advocating for parallel development of both. Jovan Kurbalija from the Diplo Foundation called for bringing “knowledge” back into AI discussions, noting that current frameworks focus too heavily on data while overlooking the knowledge dimension of AI systems. The discussion revealed tensions between different approaches to AI governance, with some emphasising the need for more regulation and others cautioning against over-regulation that might stifle innovation.

Key themes included the democratisation of AI access, the need for transparency and explainability in AI systems, the importance of addressing bias and ensuring inclusive representation in AI development, and the challenge of balancing global coordination with local relevance. The panelists ultimately agreed on the importance of continued multi-stakeholder dialogue and the need to learn from past experiences with internet governance while avoiding previous mistakes in technology regulation.

Keypoints

Major Discussion Points:

The Current AI Governance Landscape: The panelists discussed the “blooming but fragmented” nature of AI governance, with numerous frameworks, principles, and regulations emerging globally (OECD principles, UNESCO recommendations, EU AI Act, G7 Hiroshima AI process, etc.). There was debate about whether this represents progress or creates confusion and fragmentation.

Innovation vs. Risk Management – A False Dichotomy: A central tension emerged around balancing AI innovation with risk mitigation. While some panelists argued for focusing more on enabling innovation rather than just managing risks, others contended this creates a false choice – that governance and innovation must go hand-in-hand from the beginning rather than being treated as opposing forces.

Global South Perspectives and Local Relevance: Significant emphasis was placed on ensuring AI governance is locally relevant and includes voices from the Global South. Panelists discussed the need for context-aware regulation, capacity building in developing countries, and avoiding a “one-size-fits-all” approach that might not address specific regional needs and priorities.

Knowledge vs. Data in AI Governance: A philosophical discussion emerged about shifting focus from “data” back to “knowledge” in AI governance frameworks. This included concerns about knowledge attribution, preserving local and indigenous knowledge, and ensuring that AI systems don’t centralize and monopolize human knowledge without proper attribution.

Multi-stakeholder Participation and Transparency: Throughout the discussion, panelists emphasized the importance of meaningful multi-stakeholder engagement in AI governance processes, moving beyond tokenistic participation to genuine influence on outcomes. This included calls for transparency in risk assessments and decision-making processes.

Overall Purpose:

The discussion aimed to examine how different stakeholders can collaborate to shape AI governance frameworks that are inclusive, effective, and globally coordinated while respecting local contexts. The session sought to move beyond theoretical principles toward practical approaches for implementing AI governance that balances innovation with human rights protection and addresses the needs of all regions, particularly the Global South.

Overall Tone:

The discussion maintained a professional and collaborative tone throughout, though it became more animated and engaged as panelists began to challenge each other’s perspectives. Initially, the conversation was more formal with structured introductions, but it evolved into a more dynamic exchange where panelists directly responded to and sometimes disagreed with each other’s points. The tone remained respectful despite clear philosophical differences, particularly around the innovation-regulation balance and the urgency of implementing governance measures. The moderators successfully encouraged both consensus-building and healthy debate, creating an atmosphere where diverse viewpoints could be expressed and examined.

Speakers

Speakers from the provided list:

Kathleen Ziemann – Lead of AI project at German development agency GIZ (Fair Forward project), Session moderator

Guilherme Canela – Director at UNESCO in charge of digital transformation, inclusion and policies, Session co-moderator

Melinda Claybaugh – Director of privacy policy at Meta

Jovan Kurbalija – Executive director of Diplo Foundation, based in Geneva with background from Eastern Europe

Jhalak Kakkar – Executive director of the Centre for Communication Governance in New Delhi, India

Mlindi Mashologu – Deputy director general at South Africa’s Ministry of Communications and Digital Technology (filling in for the deputy minister)

Online moderator

Audience – Multiple audience members who asked questions during the session:

  • Diane Hewitt-Mills – Founder of global data protection office called Hewitt-Mills
  • Kunle Olorundare – President of Internet Society Nigerian chapter, from Nigeria, involved in advocacy
  • Pilar Rodriguez – Youth coordinator for the Internet Governance Forum in Spain
  • Anna – Representative from R3D in Mexico
  • Grace Thompson – Online participant (question relayed through online moderator)
  • Michael Nelson – Online participant (question relayed through online moderator)

Full session report

AI Governance Discussion: Stakeholder Perspectives on Inclusive and Effective Frameworks

Executive Summary

This discussion on artificial intelligence governance brought together diverse stakeholders to examine current AI regulation approaches and explore pathways towards inclusive frameworks. Moderated by Kathleen Ziemann from GIZ’s Fair Forward project and Guilherme Canela from UNESCO, the session featured representatives from Meta (Melinda Claybaugh), the South African government (Mlindi Mashologu), the Centre for Communication Governance (Jhalak Kakkar), and the Diplo Foundation (Jovan Kurbalija). The discussion covered the current fragmented landscape of AI governance, debates around balancing innovation with risk management, and the importance of Global South perspectives in developing effective AI frameworks.

Current AI Governance Landscape

Fragmented Framework Development

Kathleen Ziemann opened by describing the current AI governance landscape as “blooming but fragmented,” highlighting numerous parallel initiatives including:

  • OECD AI principles
  • UNESCO’s AI ethics recommendations
  • EU AI Act
  • G7 Hiroshima AI process
  • Various national strategies emerging globally

Melinda Claybaugh from Meta characterized this as “an inflection point” where many frameworks exist but fundamental questions remain about implementation and effectiveness. She noted that while governance frameworks are abundant, significant disagreement persists about what constitutes AI risks and how to measure them scientifically.

Jovan Kurbalija provided additional context, noting that AI has become commoditized with 434 large language models in China alone. This proliferation has shifted the risk landscape from concerns about a few powerful AI systems to challenges arising from widespread deployment of numerous AI models.

Panelist Perspectives

Private Sector View: Meta’s Position

Melinda Claybaugh argued that the AI governance conversation may have become overweighted towards risk and safety concerns. She advocated for broadening the discussion to include opportunity and enabling innovation, asking: “Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with?”

Claybaugh emphasised that existing laws and frameworks already address many AI-related harms and suggested assessing their fitness for purpose rather than creating new regulatory structures. She advocated for risk assessment processes that are “objective, transparent, and auditable, similar to GDPR accountability structures.”

Civil Society Perspective: Governance from the Start

Jhalak Kakkar directly challenged the framing of innovation versus regulation as competing priorities, arguing it creates a “false sense of dichotomy.” She contended that innovation and governance must go hand in hand, emphasising that “we need to be carrying out AI impact assessments from a socio-technical perspective so that we really understand impacts on society and individuals.”

Kakkar stressed the importance of meaningful multi-stakeholder participation and strengthening mechanisms like the Internet Governance Forum (IGF) to ensure holistic input from diverse perspectives. She emphasised that transparency and explainability are crucial when bias affects decision-making systems.

Government Perspective: Context-Aware Approaches

Mlindi Mashologu from South Africa emphasised that “there is no one-size-fits-all when it comes to AI,” advocating for foundational approaches grounded in equity. He promoted “context-aware regulatory innovation” through adaptive governance models including regulatory sandboxes that enable responsible innovation while managing risks.

Mashologu highlighted South Africa’s G20 presidency work on developing a toolkit to reduce AI-related inequalities from a Global South perspective. He emphasized that AI governance must ensure technology empowers individuals rather than undermining their rights and dignity.

International Governance Perspective: Knowledge vs. Data

Jovan Kurbalija introduced a unique perspective by arguing for a fundamental shift in AI governance language from data back to knowledge. He observed that while the World Summit on the Information Society originally focused on knowledge, current frameworks have moved to focus on data instead. “AI is about knowledge,” he argued, not merely data processing.

Kurbalija also provided a nuanced view on bias, arguing against the “obsession with cleaning bias” and distinguishing between illegal biases that threaten communities and natural human biases that reflect legitimate diversity. “We should keep in mind that we are biased machines,” he noted.

Key Themes Discussed

Innovation and Risk Management Balance

The discussion revealed different perspectives on balancing innovation with risk management. While Claybaugh emphasised concerns about over-regulation stifling innovation, Kakkar argued for implementing governance mechanisms from the beginning to prevent harmful path dependencies. Mashologu offered a middle ground through adaptive governance approaches like regulatory sandboxes.

Global South Inclusion and Local Relevance

Multiple panellists emphasised the importance of ensuring AI governance frameworks include meaningful Global South participation and local relevance. Mashologu highlighted regional initiatives like the African Union AI strategy, while Kakkar emphasised international coordination through existing multi-stakeholder forums.

Human Rights and Transparency

There was broad agreement on anchoring AI governance in human rights principles and ensuring transparency and explainability, particularly for systems affecting human lives. However, disagreements remained about implementation approaches, with industry preferring self-regulatory mechanisms and civil society advocating for external oversight.

Audience Engagement

Environmental and Social Justice Concerns

An audience member from R3D in Mexico challenged the panel about environmental impacts and extractivism related to AI infrastructure development, particularly regarding data center placement and resource extraction. This highlighted how AI governance discussions often overlook broader environmental and social costs that disproportionately affect Global South communities.

Practical Implementation Questions

Online questions addressed specific frameworks like the Council of Europe’s convention and practical implementation challenges. Audience members also raised concerns about bias in data collection and the need for inclusive approaches that account for multiple stakeholder perspectives.

B Corp Social Offset Proposal

One audience member proposed a B Corp social offset model for AI companies, suggesting mechanisms for corporate accountability beyond traditional regulatory approaches.

Areas of Agreement and Disagreement

Consensus Points

Panellists agreed on several fundamental principles:

  • Importance of multi-stakeholder participation
  • Need for transparency and explainability
  • Value of building upon existing legal frameworks rather than creating entirely new structures
  • Importance of human rights as foundational principles
  • Need for contextual adaptation of governance frameworks

Persistent Tensions

Key disagreements included:

  • Emphasis and timing of governance mechanisms (early implementation vs. avoiding over-regulation)
  • Adequacy of existing frameworks versus need for AI-specific mechanisms
  • Preferences for self-regulation versus external oversight
  • Approaches to addressing bias and ensuring inclusivity

Conclusions

The discussion highlighted both the complexity of AI governance challenges and the diversity of stakeholder perspectives. While panellists agreed on many fundamental principles, significant differences remained regarding implementation approaches and priorities. The conversation demonstrated the ongoing need for inclusive dialogue that brings together diverse perspectives while addressing practical governance challenges.

The session underscored the importance of ensuring Global South voices are meaningfully included in AI governance development, and that frameworks must be adaptable to local contexts while maintaining coherent overarching principles. The debate between innovation enablement and risk management continues to be a central tension requiring careful navigation as AI governance frameworks evolve.

Session transcript

Kathleen Ziemann: Welcome. Welcome to the main session on the governance of AI. My name is Kathleen Ziemann. I lead an AI project at the German development agency GIZ. The project is called Fair Forward. I will be moderating this session today together with Guilherme. Guilherme, maybe you introduce yourself as well. Hello, good morning everyone. My name is Guilherme Canela and I’m the director at UNESCO in charge of digital transformation, inclusion and policies. A real pleasure to be here with Kathleen and this fantastic panel.

Yes, so Guilherme and I are very excited to have representatives from different regions and sectors here on the panel that will discuss AI governance with us. And dear panelists, thank you so much for coming. Let me briefly introduce you. So, to our left we have Melinda Claybaugh, director of privacy policy at Meta. Welcome Melinda. And next to Melinda sits Jovan Kurbalija, executive director of Diplo Foundation, based in Geneva but with a background from Eastern Europe. And next to you, Jovan, sits Jhalak Kakkar. Welcome Jhalak, happy to have you. Jhalak Kakkar is the executive director of the Centre for Communication Governance in New Delhi, India. And we are very happy also to welcome you, Mlindi Mashologu.

You are filling in for the deputy minister from South Africa from the Ministry of Communications and Digital Technology, and your title is deputy director general at the ministry. Thank you all for coming, and we are very sad that Mondli couldn’t come; he was affected by the recent activities in Israel and Iran, and his flight could not come through. Well everyone, thank you for coming. Before you set the scene from your perspectives, I would love to give a brief introduction on what we perceive under AI governance at the moment and also give us an idea of how to discuss this further. As this IGF’s theme is building digital governance together, we want to discuss how we can shape AI governance together, as we still observe different levels and possibilities of engagement across sectors and regions. I would say that currently the AI governance landscape is blooming.

Yes, we have AI governance tools like principles, processes and bodies emerging globally, and I think somehow we can also lose track in that blooming landscape, just to name a few. So in 2019, the OECD issued its AI principles, followed by UNESCO’s recommendations on the ethics of AI in 2022. In 2023, I don’t know if you remember still, but AI companies such as OpenAI, Alphabet and Meta also made voluntary commitments to implement measures like watermarking AI-generated content, and finally last year the EU AI Act came into force as the first legal framework for governing AI. Additionally, existing fora and groups are addressing AI and its governance. For example, last year the G7 launched the Hiroshima AI process, and the G20 has declared AI a key priority this year, and I think we’ll be hearing more about that from you, Mlindi, later. And then we also have various declarations, endorsements, and significant communications issued by many, like the Africa AI declaration that was signed in Kigali, for example, or the declaration on responsible AI that was signed in Hamburg recently.

And as a core document for 193 member states, the UN’s Global Digital Compact calls for concrete actions for global AI governance by establishing, for example, a global AI policy dialogue and a scientific panel on AI. So when we look at all these efforts, it seems like AI governance is not only a blooming but also a fragmented landscape, with different levels and possibilities of engagement. So how do you, dear panelists, perceive this, and what are your perspectives, but also your ideas, on current AI governance? What should be changed? What is missing? We would love to start with your perspective, Melinda, from the private sector. Feel free to use the next three to four minutes for an introductory statement, and yes, there you go.

Melinda Claybaugh: Great, thank you so much and thanks for having me. It’s a pleasure to be here. Just a little perspective to set the context from where Meta sits in this conversation. So at Meta, I think everyone’s familiar with our products and our services, our social media and messaging apps. But in the AI space, we sit at two places. One, we are a developer of large language models, foundational Gen AI models. They’re called Llama, and many of you might be familiar with them or with applications built on top of them. So we are a developer in that sense, and we focus largely on open source as the right approach to building large generative AI models.

At the same time, we build on top of models and we provide applications and systems through our products. So we’re kind of in both camps, just to situate folks. I was glad that you laid out the landscape; I mean, really, in the last couple of years, it’s incredible, the number of frameworks and commitments and principles and policy frameworks. It’s head spinning at times, having lived through that. And so I think it is really important to remember there’s no lack of governance in this space. But I do think that we are at an interesting inflection point. And I think we’re all kind of wondering, well, what now? We set down these principles, we have these frameworks. Meta, for example, has put out a frontier AI framework that sets out how we assess for catastrophic risks when we’re developing our models and what steps we take to mitigate them.

And yet there are still a lot of questions and concerns. And I think we’re at this inflection point for a few reasons. One, we don’t necessarily agree on what the risks are, whether there are risks, and how we quantify them. So I think we see different regions and countries want to focus more on innovation and opportunity. Other folks want to focus more on safety and the risks. There’s also a lack of technical agreement and scientific agreement about risks and how they should be measured. I think there’s also an interesting inflection point in regulation. The EU, for example, was very fast to move to regulate AI with the landmark AI Act. And I think it’s running into some problems. I think there’s now kind of a consensus amongst key policymakers and voices in the EU that maybe we went too far, and actually we don’t know whether this is really tied to the state of the science, or how to actually implement this. And now they’re looking to pause and reconsider certain aspects of digital regulation in Europe.

And then a lot of countries are kind of looking for what to do and are looking for solutions for how to actually adopt and implement AI. And so I don’t think I have an easy answer, but I think we are at a moment to kind of take stock and say, okay, we’ve talked about risk. Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with and making sure the right voices, the right representation from little tech to big tech from all corners of the world are represented to have these conversations about governance.

Kathleen Ziemann: Thank you very much, Melinda. I would love to continue with you, Mlindi, to give us the perspective from the South African government. How do you perceive the current landscape? What is important to you at the moment?

Mlindi Mashologu: Thank you. Thank you, Kathleen, for that. I think from the South African government, what we see, and I think it’s general knowledge, is that AI is a true general purpose technology, much like electricity or the Internet, and it affects various sectors of our economy. But also, you know, we see that with such transformative power comes responsibility, so we want to ensure that AI systems are not only effective, but also, you know, ethical, inclusive, and accountable.

So, I think that’s one of the first things that we want to do. But also, to govern AI effectively, we’re trying to come up with a shared vocabulary and a principled, you know, foundation, as reflected in some of the initiatives that you mentioned before, like the OECD principles and instruments at the UN level. We are also making sure that we do have the required sector-specific policy interventions, you know, that are technically informed and locally relevant, because we see that regulating AI in financial services would be different from regulating AI in, say, agriculture.

So, we’re trying to come up, you know, with different approaches. Some of the areas that we are focusing on as a government, from the regional point of view, is to make sure that our approach is grounded in the principle of data justice, which puts, you know, human rights, economic equity, as well as environmental sustainability at the center of AI. We also recognize, you know, the impact of climate change on human rights and environmental sustainability, and the risk of reinforcing historical inequities, so that’s one of the concrete proposals that we’re looking into. The other area that we’re focusing on is sufficient explainability, a requirement for AI decisions that we’re advocating, especially, you know, for those that impact human lives and livelihoods, as well as the development of human rights.

So, you know, if you were to look at areas such as, for instance, credit scoring, predictive policing, or healthcare diagnostics, we need to have a right to understand how these decisions have actually been made, you know, and how the AI systems arrive at those decisions. Further from that, one of the areas that we are following as well is human-in-the-loop learning, so, you know, human involvement in the development of AI systems, from design as well as deployment, so that humans must guide, and when needed, override automated systems. This also includes, you know, reinforcement learning with human feedback and clear thresholds for interventions in high-risk domains.

I think the last point that I just want to focus on is, you know, that our participation in terms of global AI governance has been very, very important. We have a lot of partnerships in place, so from our side as a country, in terms of the policy that we are currently developing, we are looking, you know, to leverage the areas that have already developed some frameworks, which include the African Union data policy framework. So we are building models, you know, of governance rooted in equity, working with the African Union, and we don’t want AI to replace humans; we want AI to actually work with humans, assisting us in some of the most pressing needs of our society.

Kathleen Ziemann: Thank you very much; especially the local relevance of AI governance will also be discussed in our round later, so that is a very important point you made. Jhalak, I think, you know, that you are very much rooted in civil society, but also in academia and research, so if you could bring these two perspectives together, that would be very much appreciated.

Jhalak Kakkar: Thank you. Thank you, Kathleen. I think, you know, when we think about AI governance, one is what is the process and input into the creation of AI governance either internationally or domestically? And then, actually, what is the substance of, you know, what we are structuring AI governance as? And if I can first just take a couple of minutes to talk about the process. I think, you know, if we learn from the past, it is very important to have multi-stakeholder input as any sort of governance mechanism is being created, because different stakeholders sitting at different parts of the ecosystem are able to bring forth different perspectives, and we end up in a more multi-stakeholder environment.

I think one of the things that we have increasingly seen is a shift towards multilateralism, and I think, you know, the IGF is a perfect place to talk about the need to focus on multi-stakeholderism and enabling meaningful participation: not participation that is done as a matter of form, but participation that actually impacts outcomes and outputs.

I think the second piece that I want to talk about when I talk about process is the increasing need to meaningfully engage with a broader cross-section of civil society, academia, and researchers, including those bringing valued and informed perspectives from the global south. The way, you know, a toaster works in the United States, versus the way it works in Japan, versus the way it works in Vietnam or India, is pretty much the same, but, you know, AI as a technology will be shaped, in the way it functions and in the way it impacts, very differently in different contexts.

So, I think the third piece that I want to talk about is creating a process that is meaningful: meaningful participation by civil society across the global majority is very important to enable, and we can talk maybe later in this conversation about some of the challenges that have been preventing that currently. I think if we talk about the substance of AI governance, one piece is around how do we really, truly democratise access to AI? We’ve seen that a lot of technology development has historically been concentrated in certain regions. At a moment in time when we’re talking about the WSIS plus 20 review, I want to go back to something that was articulated in the Tunis agenda, which spoke about facilitating technology transfer to bridge developmental divides.

While it’s happened, perhaps, with ICANN and ISOC, you know, supporting digital literacy and training, there have been less substantial moves towards operationalisation in the context of AI. I think in this context, it’s very important to think about how, from the get-go, we enhance the capacity of countries to create local AI ecosystems so that we don’t have a concentration of infrastructure and technology in certain regions. We talk about mechanisms such as open data set platforms, you know, some kind of AI commons, and, you know, how do we facilitate that access to technology and to an AI commons, and really think about how we democratise access to this technology so that, you know, we have AI for social good which is contextually developed for different regions and different contexts. I think the last point I want to make is that regulation and governance is not a bad word. Very often, I’m hearing conversations along the lines of: we’ve talked about risk.

Let’s focus on innovation now. I think it’s creating a false sense of dichotomy. I think they have to go hand in hand. And, you know, I think in the past, the mistake that we’ve made is not developing governance mechanisms from the get-go. And it doesn’t have to be heavy governance and regulation from the get-go, right? I think at this stage, and Melinda was talking about the fact that we don’t understand what the risks are, we need to be assessing risks. We need to be carrying out AI impact assessments. This has to be done from a socio-technical perspective so that we really understand impacts on society and on individuals, because otherwise, you know, we’re going around in circles saying we don’t know what the risks are, we don’t know what the harms are, we don’t know how it’s going to impact us.

So let’s start setting up mechanisms, whether it’s sandboxes, you know, whether it is AI impact assessments, whether it is audits. I know that, you know, we’ll go back to conversations saying there’s a regulatory burden to this, it’s going to slow down innovation. But are there ways we can start to think about how we can operationalize these in light-touch ways, so that we can in parallel start to understand what are the harms, what are the impacts that are coming up, so that we don’t create path dependencies for ourselves later on, where then we’re just doing band-aid solutions?

So I think that’s a big part of what we’re trying to do. I think it’s important to start with the approach of understanding the impact of AI on the whole. And I think it’s important to also think about the evolution of the technology so it’s beneficial to our society and individuals, rather than landing up in a space where it’s developed in a direction we didn’t quite envisage, and we didn’t realize the unintended consequences that would come with shaping it in a particular way. I’ll stop here and come in with more points later.

Kathleen Ziemann: Thank you. Jovan, you have a lot of practice in AI, you call yourself a master in AI. We would love to hear your perspective on your role in AI, but also on how AI is governed in Europe.

Jovan Kurbalija: Thank you, Kathleen. It’s a really great pleasure to be here today. When I was preparing cognitively for the session, I asked myself how we can make a difference. And one point which is fascinating is that in three years’ time, the AI landscape has changed profoundly. Almost three years ago, when ChatGPT was released, it was magical technology. It can write you poetry, it can write you a thesis, whatever you want. And at that time, you remember, the reactions were: let’s regulate it, let’s ban it, let’s control it.

There were knee-jerk reactions: let’s establish something analogous to the nuclear agency in Vienna for AI. And there were so many ideas. Fast forward, today we have a realism, and for those colleagues from Latin America, the metaphor could be that AI governance is a bit of magical realism, like Llosa, Márquez, and others. You have the magic of AI, like any other technology. And I guess many of us in this room are attracted to the internet and AI and digital because of this magical element. But there is a realism, and I will focus now on this realism. The first point is that AI became a commodity. We heard yesterday that in China there are 434, as of yesterday, large language models. I think similar statistics hold for other countries worldwide.

Therefore, AI is not something which is just reserved for a few people in the lab. It’s becoming an affordable commodity. It has enormous impact. One impact is that you can develop an AI agent in five minutes. Exactly; our record is four minutes, 34 seconds. That’s basically unthinkable. Only a few years ago, it was a matter of years of research. That’s the first point. Therefore, the whole construct about risks is basically shifting towards this affordable commodity. The second point is that we are now on the edge where we will have basically AI on our mobile. And then the question we can ask is: today we will produce some knowledge here in our interaction.

Should that knowledge belong to us, to the IGF, to our organizations, or to somebody else? Therefore, this is the second point: bottom-up AI. We will be able to codify our knowledge, to preserve our knowledge, individual or group or family, and that will profoundly shift AI governance discussions. And the third point in this context which I would like to advance in this introductory remark is that we have to change our governance language. If you read the WSIS documents, both Tunis and Geneva, the key term was knowledge, not data. Data was mentioned here and there. Now, somehow, in 20 years’ time, knowledge has been completely cleaned out; I hope it will be brought back in WSIS Plus 20. You don’t have it in the GDC, you don’t have it in the WSIS documents; you have only data. And AI is about knowledge. It’s not just about data.

That’s an interesting framing issue. In the discussion, I hope that we can come to some concrete issues about, for example, sharing weights, and through that sharing our knowledge, and the way we can protect our knowledge, especially from the perspective of developing countries. Because we are on the edge of the risk that that knowledge can be basically centralized and monopolized, and we had all the experiences in the early days of the Internet, where the promise that anyone can develop a digital solution, an Internet solution, ended at the end of the day with just a few who can do it. And that wisdom should help us in developing AI governance solutions, and we can discuss concrete ideas and proposals.

Kathleen Ziemann: Thank you very much, Jovan, also for the references to the whole history of the Internet. I think that’s always great to have as expertise on the panel here at IGF. Thank you all for setting the scene. I think we already got an idea of the different perspectives we have here, and also the possibilities for synergies, but maybe also for conflict. And that’s also a bit of our role as moderators, to bring out these different possibilities with you on the panel. We would love to start now a more open round of discussion. We have prepared questions for you to start with, but we also hope that something evolves between you, and that you can refer to each other and answer a bit of the questions you’ve already put in the room here. But first of all, we would start with you, Mlindi, to give us an idea: you already spoke about the local relevance of AI and how to insert that into global processes, and as South Africa is currently holding the G20 presidency, how will you make sure, within your functions, that the local relevance of AI and the AI frameworks that South Africa has established will be included in the global dialogue?

Mlindi Mashologu: Thank you, Kathleen. I think it’s important to know that AI is a priority in terms of our G20 presidency. The reason why we put it there is that we picked up that how we then govern it will determine how inclusive we keep it, you know, and just how our societies will actually be tomorrow. So, in our approach, what we have tried to do is to ground the governance in two complementary dimensions, one being macro foresight and the other micro precision.

So, from the macro foresight point of view, we look at AI from the long-term view, recognizing, you know, its impact on society over a much longer period, shaping our economy. From our G20 agenda, we are championing the development of a toolkit which will try to reduce the inequalities, you know, connected to the use of AI. This toolkit also seeks to identify the structural and systematic ways in which AI can both amplify and redress inequality, especially from the global south. But also, we see that this foresight requires geopolitical realism, because, I mean, we see that AI cannot be dominated by a handful of countries or private sector actors; it has to be, you know, multilateral, multi-stakeholder, as well as multi-sectoral. That is why we are working on expanding the scope of participation, bringing more voices from the continent, from the global south, and from underrepresented communities to the center of the AI governance dialogue. But we are also matching, you know, the macro vision with micro precision, whereby we look at the ability to address granular, context-specific realities.

So, as I highlighted before, I mean, we see that there is no one-size-fits-all when it comes to AI. From there on, we advocate, you know, for context-aware regulatory innovation, which includes regulatory sandboxes, human-in-the-loop mechanisms, but also adaptive policy tools that can be calibrated to sector-specific risks and benefits. One of the areas that we are focusing on as well is to ensure that we do capacity building, developing local talent, research ecosystems, as well as ethical oversight mechanisms, because we believe that AI governance must be owned by all sectors of our economy, from the rural areas to, you know, the cities and all that.

But also, from our presidency, we aim to bridge, you know, the governance within the regional frameworks, so we align with the African Union’s emerging AI strategy, NEPAD’s science, technology and innovation frameworks, as well as regional policy harmonization through SADC. We see that this integration at the regional level is not peripheral; it is foundational in terms of, you know, the global governance agenda. Finally, in terms of our G20 presidency, we would just like to call on our partners and international institutions to support distributed AI governance architectures, so that, you know, we can all be inclusive and equitable, and make sure that the benefits of AI can be much more meaningful for our society, while we are also addressing, you know, the associated risks related to AI, I think.

Guilherme Canela: So, Melinda, moving to you now. Actually, Jhalak stole my thunder when I was preparing the follow-up question to you, because I think she touched on a point that I’m sure several in the audience and online have thought about when you were speaking: what she called the false dichotomy between innovation and protecting human rights. Because in the end, the objective of governance, if it’s done in alignment with international human rights law, is to protect human rights for all, not only for the companies, right? So, how do you respond to this?

You framed it, of course very briefly, as if there were an antagonism between those two things. At the same time, we know all companies, including yours, are investing in human rights departments and reports, and, when there are specific issues like elections, in how to deal with these technologies and their risks. And yet, there is a lot of scepticism regarding the way the private sector, not only your company, is dealing with this situation. So, could you go a bit deeper on what Jhalak was saying about what, in her view, is a false dichotomy between those two things?

Melinda Claybaugh: Yeah, I mean, I guess I would agree, to be provocative. In fact, I think what I’m trying to say is that we need to look at everything together. And to be clear, by AI I’m talking about advanced, you know, generative AI. I think we tend to talk about AI kind of loosely, but the conversations to date at the international institution level and the framework and commitment level have really been about the most advanced generative AI. Those conversations have largely been focused around risk, and safety risk in particular, and I think that’s an important piece, of course, and we’ve implemented a frontier AI safety framework to address concerns about catastrophic risks. On the conversation around harm and risk, however, two things.

One, I think we need to be very specific about what are the harms we’re trying to avoid, and as you point out, a lot of the harms we’re trying to avoid are harms that already exist that we’ve been trying to deal with. So, people talk about misinformation, people talk about kids, people talk about all the things that are existing problems that have existing policy frameworks and solutions to varying degrees that differ in different places. What I am trying to convey is that we also need to be talking about enabling the technology, not to say ignoring risk, not to say not having that conversation, but we’re missing a key element if we’re not talking about everything together.

Because otherwise it becomes overweighted in one direction, and, you know, I don’t think there’s a global consensus around the idea that advanced generative AI is inherently dangerous and risky. I think that’s a live question that a lot of people have opinions about, but there is a lot of interest and opinions about the benefits and advances of AI, and so I think that all needs to be brought together into a conversation. I will also say that there are existing laws and frameworks already in place, and they even pre-date ChatGPT, right? So we have laws around the harms that people are talking about, around copyright and data use and misinformation and safety and all of that. We have legal frameworks for it, so I would like to see attention around whether those legal frameworks are fit for purpose or not with the new technology, rather than seeking to regulate the technology.

Kathleen Ziemann: Thank you, that’s a very interesting aspect that Jhalak was also touching upon a bit, especially on that idea whether we can use the already existing laws and frameworks in the context of this new technology. Jhalak, how do you perceive this? Do we have all the rules already, and if not, what is missing?

Jhalak Kakkar: Yeah, I think, you know, there’s been a lot of conversation around whether there is existing regulation that can apply to AI and whether there’s a need for more regulation, and I think there are several existing pieces of legislation that would be relevant in the context of AI, just to name a few: data protection, competition and antitrust law, platform governance laws in different countries, consumer protection laws, criminal laws. So, yes, I think I also agree with Melinda’s point that we need to think about whether some of these laws are fit for purpose.

Do they need to be reinterpreted, reimagined, or amended to account for the different context that AI brings in? I mean, if I can give an example, we’ve seen the need for traditional antitrust and competition law to evolve in the context of digital markets. You know, when internet platforms came in, you could have said we have existing competition law, we have existing antitrust law, and that’s going to apply, and we have seen over the last couple of decades that it is not fit for purpose to deal with, you know, the new realities of network effects, data advantage, zero-price services, and multi-sided markets that have come in with the advent of internet platforms.

Similarly, we already see a hot debate happening around intellectual property and copyright laws: whether copyright law is well-positioned to deal with the unique situation that has arisen where, you know, companies are training their LLMs on a lot of the knowledge and data available on the internet, relying on the fair use exception. What was the intention of the fair use exception under copyright law? It was that big publishers should not amass a lot of knowledge with them, and it gives people like you and me access to use that knowledge, reference that knowledge, and build on that knowledge. But, you know, it’s an interesting situation where you have large companies now sort of, you know, leveraging fair use.

So I think, you know, we already have courts around the world dealing with this issue. I’m sure legislatures are going to deal with it, and it’s a question that as a society we have to think about: yes, there is development, there are new things that these companies are doing, maybe there is fundamentally a transformation happening when they build on this, but what are we losing out on? What are the advantages? All of that has to be weighed and thought through. Coming back to the false dichotomy point, I want to go back to that. Yes, we know a lot of harms that have already arisen in the digital and internet platform context. We’re well aware of those, and civil society, academia, and researchers are looking out for them as we see AI, and more specifically LLMs, develop.

But those are existing harms that we’re looking for. There are a lot of harms that we don’t yet know may exist, and just to give an example, I don’t think 15 years back we thought about the kind of harm social media platforms would have on children. It just wasn’t something that was envisaged. Maybe someone could have envisaged children seeing some content, but the mental health impacts, the cyberbullying, the extent and nature of it, a lot of this was unintended and unenvisaged. And I think unless we are scrutinizing these systems, and it’s not only a question of catastrophic risk, we have to think about individual-level impacts and societal-level impacts. Unless we’re engaging with these systems and understanding these systems from the get-go, those impacts and implications and negative consequences will only surface five to ten years from now. And we cannot rely only on companies. It’s wonderful to see companies heavily investing in human rights teams and trust and safety teams; as a space, we didn’t have trust and safety ten years back, so it’s a new space that has grown.

You have so many professionals coming into this space with specialized skill sets, and it’s great to see that, but we’ve also seen that companies have never been particularly adept at working only under the realm of self-regulation. And this is across industries, I’m not only pointing to tech; we’ve seen that time and time again over the last 150 years of industrial regulation. So I think we have to move beyond the sense that companies will self-regulate. Very often they don’t disclose harms that are apparent to them, and we need external regulators, we need communities to be engaging in a bottom-up approach, civil society to be engaging, multilateral institutions to be coming in. We need the development of guidance, of guidelines, to operationalize the AI principles that we’ve all been talking about and working on over the last five, seven, eight years. So I think we have to move forward into the next phase of AI governance.

Guilherme Canela: Thank you, very interesting. So now what’s going to happen: I will do a follow-up question to Jovan, but then we are going to open to you. So if you want to start queuing at the available mics, you are welcome to do it. Jovan, let’s go back to magical realism and the issue of bringing knowledge back into the discussion. It’s a very interesting point you raised. You probably remember that when there was the Tunis round of the World Summit, UNESCO published a very groundbreaking report called Towards Knowledge Societies. It’s very interesting: to this day, every week, that report is one of the most downloaded reports in the UNESCO online library, which shows that, independently of what we are discussing here in these very limited circles, people overall are still very much interested in the knowledge component of this conversation.

So with this preamble I want to ask you to go a bit deeper: how do we bring knowledge back into this conversation, of course connecting with the new topics? Of course data is a relevant issue, we can’t ignore the discussion of data governance, but the South African presidency has three main topics, correct me if I’m wrong: solidarity, equality, and sustainability. And if you read that UNESCO report of 20 years ago, connecting with the challenges of the then Information Society, you’ll see those three keywords appearing, maybe in different ways. People like Manuel Castells and Néstor García Canclini were saying those things. So what is your view on how we get back to this important part of the conversation when we are looking at the AI governance frameworks?

Jovan Kurbalija: Sure, it’s good that you brought this up; it is, by the way, an excellent report. Two reports are excellent: the UNESCO one and the World Bank report on digital dividends. Those are landmarks. What worried me, and I studied this and didn’t want to bring it up, but you said you don’t mind controversies, is that even UNESCO, which set the knowledge stage with that excellent report, backpedaled on knowledge in the subsequent years, which was part of the overall policy fashion. Even in the ethical framework, the recommendation, data is more present.

That’s the first point. The second point: why do people download it? They react intuitively; they can understand knowledge. Data is a bit abstract; knowledge is what we are now exchanging, creating, developing. And my point is that this common sense element is extremely important, and it comes through bottom-up AI, through, let’s say, preserving the knowledge of today’s discussion, maybe the excellent questions that we’ll have. This is knowledge that was generated by us at this moment, and this is also, back to Márquez and magical realism, about grasping the moment. And it’s technically possible, it’s financially affordable, and it’s ethically desirable, if you want this trinity.

But let me just, on your question, reflect on two points of discussion. There are many false dichotomies, including on the question of knowledge. I can list them: you have multilateral versus multi-stakeholder, privacy versus security, freedom versus public interest. And we can label them as false dichotomies, but I think we should make a step forward. Ideally, we should have both: multi-stakeholder and multilateral, privacy and security. But sometimes you have to make trade-offs, and it is critical that trade-offs are made in a transparent way, so that you can say: okay, in this case I’m going for the multilateral solution, because governments have their respective roles and responsibilities. You can find the same in many other fields.

And back to your question about bringing the discussion to common sense, and the references that colleagues made. I would go back not only 150 years, or even to the Napoleonic code; I would go to Hammurabi, some 3,800 years ago. There is a provision in Hammurabi’s law: if you build a house and the house collapses, the builder of the house should be punished with a death sentence. That was the time. A harsh one. We don’t want that. But something is missing today; let me give you one example, and with this I will conclude. Diplo has its own AI. We are reporting from this session. Let’s take a hypothetical situation: our AI system gets confused and says that two of you, or all of us, said something which you didn’t say.

And you go back to Paris, and your boss says: hey, by the way, did you really say that? And you say: no, I didn’t say it, but Diplo reported it. Now, who is responsible for it, ethically, politically, legally? I’m responsible. I’m the director of Diplo. Nobody forced me to establish an AI system, to develop an AI system. Therefore, we are losing a common sense which has existed from Hammurabi, through the Napoleonic code, till today: somebody who develops an AI system and makes it available should be responsible for it. There are nuances in that, but the core principles are common sense principles. In that sense, people, by downloading knowledge, are reacting with common sense. I think in AI governance we should really get back to common sense, and be in a position to explain to a five-year-old what AI governance is. And it’s possible. I would say this is a major challenge for all of us in this room, and for the policy community: to make AI governance common sense, bottom-up, and explainable to anyone who is using AI.

Kathleen Ziemann: Thank you very much. I don’t see a queue behind the mics yet. Ah, I think we have someone now, that is great. Welcome. Happy to take your questions to the panel now. It would be great if you could say who you are, from which institution, and to whom you would like to direct your question.

Audience: Thank you. So, my name is Diane Hewitt-Mills, and I’m the founder of a global data protection office called Hewitt-Mills. For those that don’t know, under the GDPR certain organizations are mandated to appoint a data protection officer: an individual or an organization that has responsibility for independently and objectively reviewing the compliance of the organization when it comes to data protection, cybersecurity, and increasingly AI. I’m a UK-qualified barrister. I’ve been working in the area of governance for over 25 years, focused on data protection and privacy, and I’ve been running this organization for seven years, which I’m very proud to do as a female founder; I know I’m a very rare beast. Importantly, I decided five years ago to go for the B Corp standard, and I don’t know if you’re aware, but B Corp is a standard for organizations that can demonstrate high standards in environmental, social, and governance terms, ESG. So my comment or recommendation is this: we oversee carbon offsets and the efforts of organizations to demonstrate ESG, and I had a thought about whether it would be an idea for organizations to also demonstrate their social offset. For example, if you are a tech business or health business using AI, would it be an idea that you document the existing risks, think about foreseeable risks, and think about how you could offset those risks in an objective way, with an independent overseer of that type of activity? I just thought I’d throw that out to the panelists, because we’re thinking about creative ideas for making AI governance tangible and explainable, and I wondered, for example, if that had been a requirement 15 years ago for social media platforms, to demonstrate their social offset, what sort of world we might be in today.

Kathleen Ziemann: Thank you very much. So, I think it was not specifically directed to someone on the panel, so whoever wants to take that question, I’m looking at you, Melinda, but I think it might be relevant for others as well.

Melinda Claybaugh: Yeah, I’m happy to take a start at it. What you’re talking about is really a risk assessment process that is objective and transparent and auditable in some fashion. You’re right, that is the basis of the GDPR accountability structure that so many data protection laws have been built on. Increasingly we see it in the content regulation space as well, particularly in Europe: there are risk assessments and mitigations and transparency measures that can be assessed by external parties. And interestingly, we are seeing that in some early AI regulation attempts. I speak most fluently about what’s going on in the US, but we are seeing very similar structures around identifying and documenting risks, demonstrating how you’re mitigating them, and then in some fashion making that viewable to some set of external parties. I do think that is a proven and durable type of governance mechanism that makes a lot of sense. I think we still come to the issue, however, of what the risks are and how they are assessed. And I say that because it is a particularly thorny challenge in the AI safety space, and there are healthy debates around what risks are tolerable or not. But I do think that as a framework it makes a lot of sense, and there are a lot of professionals who already work in that way, and companies already have those internal mechanisms and structures. So I would be surprised if we didn’t land in a place like that, and in fact that’s what the EU AI Act essentially proposes as a structure.
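
To make the idea concrete, here is a minimal sketch of what a machine-readable risk register entry of the kind described above could look like. The field names and values are hypothetical illustrations, not any regulator’s or company’s actual schema:

```python
# Illustrative only: a minimal, machine-readable risk register entry.
# All field names and values are invented for this sketch.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskEntry:
    risk_id: str
    description: str              # the harm being considered
    severity: str                 # e.g. "low" / "medium" / "high"
    likelihood: str               # qualitative estimate before mitigation
    mitigations: List[str] = field(default_factory=list)
    residual_risk: str = "unassessed"
    external_reviewer: str = ""   # who can audit this entry

register = [
    RiskEntry(
        risk_id="R-001",
        description="Model output used for eligibility decisions without human review",
        severity="high",
        likelihood="medium",
        mitigations=["human-in-the-loop sign-off", "decision logging"],
        residual_risk="low",
        external_reviewer="independent DPO",
    )
]

# An external party could audit entries like this one.
for entry in register:
    print(entry.risk_id, entry.severity, "->", entry.residual_risk)
```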

Guilherme Canela: Sorry, just a quick follow-up question. In that case, even if there is no consensus about what the risks are, the transparency that you were describing is part of the solution, right? Companies don’t need to be forced to agree on the risks, but they need to be transparent in telling stakeholders what issues they consider risks and how they are mitigating them, right? Because the problem may be saying: this is a risk, you need to report on that. But when the requirement is to report on how you do risk assessments, then it’s a different ball game, right?

Melinda Claybaugh: Yeah, I think the trick, and I’m thinking about this through the lens of an open source model provider, is that this is another tricky area of AI governance and regulation. How you govern closed models and open models may be very different. We do all kinds of testing and risk assessment and mitigation of our model, and then we release it for people to build on, add their own data to, and build into their own applications. We don’t know how people are going to use it. We don’t know how they end up using it; we can’t see that, and so we can’t predict how the model will be used. So I think there are just nuances as we think about this in terms of who’s responsible for what. I do think some of it is common sense, who’s using it, but I think that’s part of the value chain issue that people talk about.

Kathleen Ziemann: I see that Mlindi also wanted to react to the question.

Mlindi Mashologu: I think for me the important thing is, and that is why for us as policymakers, we just want everybody to play fair when it comes to AI. You know, there are areas where we understand that there will be self-regulation from the organizations, but what is important is to make sure that at least we can look at all these risks that are emanating and make sure that we deal with those risks collectively, both from the private sector as well as from the government. Because as government, we don’t want to be seen as doing that hard regulation and all that, which might end up, you know, stifling innovation; but we want to make sure that everybody can be protected, while from the private sector point of view you can also derive the value that you want to derive from AI systems.

I think that’s what is important. But also, the other area that I’ve touched on before, the area of explainability, is actually very important, because these models might make decisions that can be very harmful to human lives. That’s why we say that these decisions need to be explainable. But it also touches on this: whenever the model makes a decision, it needs to have considered broad data sets, from various demographics as well, to make sure that you don’t look at only a few demographics and say: okay, the model can actually take a decision based on the small amount of data that you trained the model on.
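
As one illustration of the point about narrow demographic coverage, here is a minimal sketch, with entirely made-up records and group names, of how per-group error rates can flag where a model’s training or evaluation data is too thin:

```python
# Illustrative only: flagging demographic groups where a model's error rate
# diverges, using invented labels and predictions (no real system implied).
from collections import defaultdict

records = [  # (demographic_group, true_label, model_prediction)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),  # sparse, noisier
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += int(truth != pred)

for group in totals:
    rate = errors[group] / totals[group]
    # Arbitrary illustrative threshold: too few examples to trust the model here.
    note = " <- under-represented, review training data" if totals[group] < 5 else ""
    print(f"{group}: n={totals[group]}, error rate={rate:.2f}{note}")
```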

Kathleen Ziemann: Yes, definitely, and that’s also, I think, a big achievement of the open source community: to really stress that factor of explainability, of what is actually happening within the data and within the models. We would love to move on with the on-site questions, and we switch to this microphone. Happy to hear your question. Thank you very much.

Audience: So, my name is Kunle Olorundare. I’m from Nigeria and I’m the president of the Internet Society, Nigerian chapter, and we are into advocacy and the like. So, my concern is this: I know it’s just about the right time for us to start discussing AI governance. There is no gainsaying that. However, there are issues that we really need to start looking at critically, and one of those issues has to do with the way data is being collected. I listened to Jovan earlier when he was emphasizing the issue of knowledge. I agree 100%, right, because the end product of artificial intelligence is knowledge, so to say.

However, how we gather this data, I think, is very important. I’m saying that because we are looking at an AI that is going to be inclusive, that will have value for every community, and you will agree with me that this data gathering is being done by experts, and every individual person, right, has their own bias.

So, I believe that whatever data you gather is as inherently flawed as the bias of the person that gathered the data in the first place. So, we need to start looking at how we are going to bring inclusivity into how we bring all this data together, considering all the multistakeholders. I think that is very important. That is on one hand. And for me, I think it will get to a stage where even this AI we are talking about is going to become a DPG, a digital public good. I’m saying that because it’s going to be available to everybody, and everybody should be able to use it for whatever purpose they want to use it for.

But before we get there, how do we ensure that we put everybody on the same pedestal, in the sense that we need to have a framework that is universal? I listened to Melinda when she was talking about frameworks, and I began to see different frameworks coming from different stakeholders. So, we need to sit down and bring all these frameworks together so that we can have a universal framework that speaks to issues that concern everybody, so that the AI we have at the end of the day is universal and able to take care of everybody’s concerns. So, I want the panelists to react to this. I think Jovan and probably Melinda should be able to react to this. Thank you very much.

Kathleen Ziemann: Thank you very much. Jovan, do you want to go first?

Jovan Kurbalija: Just quickly: excellent point and question. Two comments; both are controversial, but the first one is more controversial. We have had a lot of discussion about cleaning biases, and I’m not speaking about illegal biases, biases which are basically insulting people’s dignity. That’s clear; that should be dealt with even by law. But beyond that, we should keep in mind that we are bias machines. I am biased. My culture, my age, my, I don’t know, hormones, whatever, are defining what I’m saying now, or what questions you ask.

Therefore, this obsession with cleaning bias, which is now calming down but existed, let’s say, one or two years ago, was very dangerous. Yes, illegal biases, biases that threaten communities, definitely. But beyond that point, we have to bring more common sense into this again. The second point you mentioned is about knowledge. Knowledge, like bias, should have attribution: financial, legal. The question you ask: is your knowledge built on your understanding and other things? The problem currently in the debate is that we are throwing our knowledge into some pot where we don’t know what happens to it.

I call it the AI Bermuda Triangle: knowledge is disappearing into it, and suddenly we are revisiting it. We have even been testing big systems in our lab, in deep-layer testing where we put in very specific, contextual knowledge, and we realize that it is taken, repackaged, and not yet sold back to us, but maybe in the future. That’s a critical issue. Your knowledge, the knowledge of a local community in Africa, Ubuntu, oral knowledge, written knowledge, belongs to somebody, or should be attributed: shared within a universal framework, definitely, but attributed. That’s a critical issue when it comes to knowledge, and also to your previous question about what we should do with knowledge.

And again, the instruments are there, and the risk is that an AI governance discussion confused with everything and anything, magical realism a bit, basically misses the core points. It is like a crying baby: instead of answering the question with existing tools, we are giving toys to the baby, which is the discussion on ethics and philosophy. I love philosophy, but there are some issues that we can solve with existing instruments related to your question: the question of bias and the question of knowledge.

Kathleen Ziemann: Melinda, before you react as well: I look at Jhalak’s face and I see that you might not agree with all of the points mentioned by Jovan, especially possibly the one that bias in data can be neglected. Is that something you’re thinking about?

Jhalak Kakkar: I mean, I don’t actually disagree with him. I think there is a reality that there is a level of bias in all of us. It’s not that the world is completely unbiased; it’s not that when judges make decisions there’s no bias there. And ultimately AI is trained on data from this world, and biases will get embedded into that: systems are trained on existing data sets which capture societal bias. I think the difference is that with human decision-making, in many contexts we have set up processes and systems, and there has to be disclosure of the thinking and reasoning going into a decision, and that can be vetted if someone raises an objection. With AI systems, that’s the challenge: explainability has, in many contexts and for various kinds of AI systems, been challenging to establish, and I think that’s a question that is still being grappled with. And I think disclosure of the use of AI systems in various contexts, whether someone knows that an AI system is being used and that they are being subjected to it, and then the kind of bias that comes into decision-making that impacts them: I think that’s the other piece of it.
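
One very simple family of techniques behind the explainability being discussed here is sensitivity analysis: perturb one input to a decision system and watch how the output moves. Here is a rough sketch under invented assumptions; the scoring rule, feature names, and weights are all made up, and real methods (e.g. SHAP or LIME) are far more careful:

```python
# Illustrative only: a crude "explanation" of a toy scoring rule, obtained by
# perturbing one input at a time. All names and weights are invented.

def score(applicant):
    # Hypothetical decision rule standing in for an opaque model.
    return (0.6 * applicant["income"]
            + 0.3 * applicant["years_employed"]
            - 0.2 * applicant["debt"])

applicant = {"income": 0.5, "years_employed": 0.8, "debt": 0.9}
base = score(applicant)
print(f"base score: {base:.2f}")

for feature in applicant:
    perturbed = dict(applicant)
    perturbed[feature] += 0.1            # nudge one input upward
    delta = score(perturbed) - base      # how much the decision moves
    print(f"{feature}: effect of a +0.1 change = {delta:+.2f}")
```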

Kathleen Ziemann: Thank you. Melinda?

Melinda Claybaugh: Just two quick thoughts. I think it is critical that AI works for everyone, and part of that is making sure that we do have the data, that there is a way of either training a model on, or fine-tuning a model on, data that is as representative as possible. I think that’s a foundational key concept. I also think that there needs to be a lot of education around AI outputs, so that when people are interacting with AI, they understand that what they’re getting back may not be the truth, right?

Like, what is it? It’s actually just a prediction about the next right word, and I think we’re at the very early stages of this in society, so our expectations of what it is, what it should be, and what these outputs should be relied on for are still very much evolving. I do agree that when AI is being used to make decisions about people or their eligibility for services or jobs, there is an extra level of concern and caution and requirements that should be added, in terms of a human in the loop or transparency around how a decision was made. I absolutely understand the concerns around that. So I think as a society, as we get more experience and understand these tools more, and what they should and should not be used for, these questions will get more sorted out.
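
For readers who want the “prediction about the next right word” point made concrete, here is a toy sketch with an invented probability table, showing that sampling a likely continuation is not the same as checking a fact:

```python
# Illustrative only: a toy "next word" distribution, showing that an LLM's
# output is a sampled prediction over likely continuations, not a fact check.
import random

# Hypothetical probabilities for the word following some prompt.
next_word_probs = {
    "Paris": 0.55,
    "Lyon": 0.20,
    "Berlin": 0.15,
    "purple": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

for _ in range(5):
    # random.choices samples according to the weights: plausible, but unverified.
    print(random.choices(words, weights=weights, k=1)[0])
```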

Kathleen Ziemann: Thank you very much. At the IGF we want to be as inclusive as possible; that’s why we also have online participation for people who can’t be here, or who maybe can’t afford to travel here, and we have our online moderator, Pedro, behind the mic here. Pedro, if you could give us two relevant questions from the online space that need to be addressed to the panel, that would be really great.

Online moderator: Perfect, thanks. We have a question from Grace Thompson, directed to Jhalak and then Melinda: what are the panelists’ views about the consensus gathered in the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law, the first international treaty and legally binding document to safeguard people in the development and oversight of AI systems? We at the Center for AI and Digital Policy advocate for endorsement of this international AI treaty, which has 42 signatories to date, including non-European states.

Kathleen Ziemann: It’s not really coming through, Pedro; we have difficulties understanding you, I think. Can you give us maybe the two main words that need to be discussed? Was it the EU AI Act in the first one?

Online moderator: The Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law. Comments from the panel, for Jhalak and Melinda.

Kathleen Ziemann: Okay, thank you very much. I think that went through okayish. Jhalak, do you want to react?

Jhalak Kakkar: Are they asking about the framework? Yes. So I think there has been a lot of conversation globally around what is the right approach to take. Melinda was saying we need to think about what systems need more scrutiny than others: systems that are impacting individuals and people directly versus those that are not.

There’s been a whole conversation, which we’ve referenced earlier in this dialogue, around innovation versus regulation and what is the right level of regulation to come in with at this point. What is too heavy? What is not enough? And I don’t have the answer to that, right. I think in different contexts it’s going to be different. Countries which have a high regulatory capacity can do and implement more. In countries that don’t, we have to frame regulation and laws which work for those regulatory and policy contexts.

But what I think is really important is that occasions like, for instance, the India AI Impact Summit, which is an opportunity because India is trying to emerge as a leader in the global majority, really bring together thinking from civil society, academia, researchers, industry, and governments, particularly from the global majority, to talk about what would be the right way forward. Would it be borrowing from ideas that have developed in another context? Perhaps there are ideas that are relevant to pick up from there.

But what is contextually and locally useful and relevant from within the contexts we come from? I mean, places like India and South Africa may have a lot of AI that was developed elsewhere, say a Sloan Kettering health diagnostic tool, which is then brought in and deployed in the Indian context. But the demographics are different. The kind of testing and treatments available in our primary, secondary, and tertiary health care settings are different.

So there are a lot of differences. How do we think about something like that, which may not really be a topic of discussion in other parts of the world? I think in India and places like South Africa we may have slightly different challenges to grapple with, and I think it’s very important that those conversations happen as well.

Mlindi Mashologu: Yeah, I think from the South African point of view, as my colleague has just highlighted, one of the areas is human rights, which are enshrined in the Constitution. So whatever you do from the technology point of view, you need to make sure that it doesn’t impact human rights, you know, as well as the Bill of Rights. It’s one of the things we’re trying to do: to make sure that when you deploy these types of technologies, they are not infringing on the rights of people.

But you’ll also find that we have some other laws, like our Protection of Personal Information Act, which says that, you know, you can’t just use my information as you wish. But then, how do we make sure that we can use your information for the public good? So now you have these two laws competing: one is trying to use this information for the greater good, but one is saying that you can’t just use my information.

So I think it’s going to be quite a balancing act that we’re trying to do: what are the things that we can use to make sure that we drive innovation, and what are the things that we need to do to make sure that we don’t infringe on human rights, as well as, you know, the information of the people.

Kathleen Ziemann: Yes. Thank you very much. I see there are further questions from the floor. Jovan, you wanted to react briefly first, which I think would also be great.

Jovan Kurbalija: There were two concrete questions, on the EU AI Act and the Council of Europe Convention. Just quickly: those are very interesting points. You moved fast, and probably too far. As we’re hearing from Brussels, there is now a bit of revisiting of some provisions, especially on defining high-risk models through FLOPs and other things. The Council of Europe is an interesting organisation.

They adopted the Convention on AI, but they’re an interesting organisation because, under one roof, you first have the Convention, but you also have human rights coverage, the human rights court next to it, and cybercrime: the Council of Europe is host of the Budapest Convention. You have science. Therefore, it’s one of the rare organisations where the interplay between the existing silos, when it comes to AI, can basically be bridged within one organisation. Those are just two points on the EU AI Act and the Council of Europe Convention.

Kathleen Ziemann: Thank you very much. Let’s take the last two questions from the floor. I see two people standing behind the mic over there.

Audience: Yes, thank you. My name is Pilar Rodriguez, I’m the youth coordinator for the Internet Governance Forum in Spain. I wanted to follow up a little on what Ms Jhalak was saying about how countries can achieve AI governance and AI sovereignty without this leading to, let’s say, AI fragmentation. I’m not just thinking from a regulatory perspective, because we have the AI Act in Europe, the California AI regulation, China has its regulation. So doesn’t that lead to more fragmentation? And coming from the youth perspective, is there a way to ensure that we have, let’s say, a global minimum, so that future generations can be, let’s say, protected?

Kathleen Ziemann: Thank you very much. Let’s also take the next question, from the person behind you.

Audience: Hi, I’m Anna from R3D in Mexico. It’s going to sound like I’m making a comment more than a question, but I promise that for Melinda there’s going to be a question. I was very concerned to hear this underestimation of the risks of AI, making them sound hypothetical when they have actually materialized in several examples around the world. And Jovan was mentioning this topic of knowledge and education while at the same time speaking about illegal biases, when I think that in reality there have been several examples of how classism, racism, and misogyny are affecting how people can access basic services around the world, or how police are predicting who is a suspect or not. So we shouldn’t misinform people about the actual risks.

But the question to Melinda would be related to the emergency we are living through. Since she mentioned that companies such as Meta are doing these risk assessments, I wonder how Meta is planning to self-regulate when, for example, it hasn’t done environmental or human rights assessments, when it has established hyperscale data centers in places like the Netherlands, where public pressure made it stop constructing them, so that you then move them to global south countries, or to Spain in that case. So all the issues with extractivism, with water crises, with pollution arrive in other communities where there hasn’t been any consultation, but you are claiming that there has. That would be my question.

Kathleen Ziemann: Thank you very much. So two relevant points. One would be the point of fragmentation and the other one of global AI justice, basically. Melinda, do you want to react first?

Melinda Claybaugh: Sure. I mean, I can’t really speak to the data center piece; I think your question was basically around the energy needs of AI and where data centers are placed, and I can’t really speak to that. I can say that I think we all know the AI future is going to require a lot of energy, and I think there are a lot of questions about where the energy needs are and where the solutions to those energy needs are going to come from, but I can’t speak in any detail about how particular decisions are made.

Kathleen Ziemann: And in terms of fragmentation, that was part of the first question, right, the fragmentation of AI governance, having so many initiatives, so many different stakeholders. That is then also basically the question: how could you, coming from different sectors and regions, cooperate more in that area? Is there an idea here on the panel of what that could look like? Who would like to react to that?

Jovan Kurbalija: The question was how to avoid it?

Kathleen Ziemann: How different sectors and regions could cooperate even better on AI governance. How to counteract the fragmentation that might occur from the blooming landscape of AI governance.

Jovan Kurbalija: We have to define what fragmentation is, you know. Having AI adjusted to the Indian, South African, Norwegian, German, Swiss context, whatever, is basically fine. But the communication or exchanges should probably be underpinned by some sort of standardisation of the weights. Weights are basically the key element in AI systems. Therefore we may think about some sort of standards, to avoid the situation that we have with social media: if you are on one platform, you cannot migrate with your network to another. Now the Digital Services Act in the EU is trying to mitigate that. The same thing may apply to AI: if my knowledge is codified by one company and I want to move to another platform, company, whatever, there are no tools to do that. My advice would be to be very specific and to focus on standards for the weights, and then to see how we can share the weights, and in that context, how we can share the knowledge.
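
As a rough illustration of what weight portability means in practice, here is a minimal sketch that serializes a toy model’s weights to a plain, widely readable archive so that another system could reload them. Real interchange standards (for example ONNX or safetensors) are far richer than this, and the layer names here are invented:

```python
# Illustrative only: serializing toy model weights to a portable .npz archive
# (plain NumPy arrays) so another system can reload them. Names are invented.
import numpy as np

weights = {
    "layer1.w": np.random.randn(4, 8),
    "layer1.b": np.zeros(8),
    "layer2.w": np.random.randn(8, 2),
}

np.savez("model_weights.npz", **weights)       # export to a common format

restored = dict(np.load("model_weights.npz"))  # reload elsewhere
for name, tensor in restored.items():
    print(name, tensor.shape)
```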

Kathleen Ziemann: So joint standardisation, Mlindi?

Mlindi Mashologu: I think on the continent we started this governance as early as 2020-2021, when we developed the AI blueprint for the continent, and from there the African Union went on to develop the AI strategy, while individual member countries are also developing their own policies and strategies. So I think there is not much fragmentation; it’s rather that at the grassroots level each country will have particular priorities that it would like to focus on. Generally, if you were to look at all the published policies, strategies, and legislation, you’ll find that they address the core principles: the issues of ethics, the issues of bias, the issues of risk. Even from the South African point of view, in the policy we are currently finalising, we are advancing some of those aspects as well. So at the country level you’ll find that policies are not going to be exactly the same, but that is not fragmentation as such; there are simply different priorities as well.

Kathleen Ziemann: Thank you.

Guilherme Canela: Do you want to add anything?

Jhalak Kakkar: Yeah. I think there is a concern that in the drive for innovation there’s a race to the bottom in terms of adherence to responsible AI, ethical AI, and rights frameworks. We have several existing documents, ranging from the UDHR to the ICCPR, which can be interpreted, and through which international organisations and norm building can set a certain baseline. I think that as the WSIS plus 20 review happens, the IGF should be strengthened to really help not only with agenda setting for the action lines, but also as a feedback loop into the CSTD, the WSIS Forum, and other mechanisms, where there is holistic input from multiple stakeholders going into these processes, which accounts for many of the concerns that have been raised, ranging from environmental concerns to the impact of extraction in global majority contexts.

It could be questions of labor for AI, whether it’s data labeling and a lot of worker-related concerns. So I think all of this needs to be surfaced, and then these conversations need to feed back into the agenda setting as well as into the final outcomes that we have. Because I think that level of international coordination, both at the multilateral level and at the multi-stakeholder level, is important. And we all have to come together and work together to find ways to set this common baseline, so that in the race for getting ahead we don’t lose focus on the common values that we have articulated in documents like the UDHR.

Guilherme Canela: Thank you. So now we are walking towards the end, so if the online moderator has a very straightforward question.

Online moderator: Yes, we have one from Michael Nelson about two sectors. The two sectors that are spending the most money on AI are the finance sector and the military, and we know very little about their successes and failures. So he would like to hear from the panelists, especially from Jovan and Melinda: what are their fears and hopes about those two sectors?

Jovan Kurbalija: The question is about AI?

Online moderator: Especially the finance sector and the military.

Guilherme Canela: Okay, so fears and hopes, finance, military; but then I will give the floor back to all of you. One minute each: comment on that if you want, and within this one minute, what is your key takeaway from this session? So let’s start with you, Melinda.

Melinda Claybaugh: Okay, I don’t have an answer on the finance and military sector hopes and fears, to be honest. We are very focused on adding AI personalisation to our family of apps and products; I’ll leave finance and the military to others. On the key takeaway from the session: I think it really is interesting to take stock of where we are at these meetings. I’ve been at the last couple of IGFs, and I think the pace of discussion and the developments in the space are really fast-moving. And so I’m encouraged, and I would encourage us all to keep having these conversations. I think multi-stakeholder will be the word that everyone is going to say here, but it really is a unique, important role that the IGF plays in bringing people together. I know we have a lot of Meta colleagues here. We take everything we hear here back home, talk to people, and inform our own direction. So let’s keep having these conversations. I think the convening power, bringing these particular voices together, is the most important contribution right now in this space.

Kathleen Ziemann: Thank you. Jovan?

Jovan Kurbalija: On the military and AI: it is unfortunately getting center stage with the conflicts, especially Ukraine and Gaza, together with the question of the use of drones. There are discussions in the UN on LAWS, lethal autonomous weapons systems, or killer robots, and the Secretary-General has been very vocal for the last five years about banning killer robots, which is basically about AI. What is my takeaway? Awareness building, education. At Diplo, we run an AI apprenticeship program, which explains AI by developing AI: people learn about AI by developing their own AI agents. And I would say, let’s demystify AI, but still enjoy its magic.

Kathleen Ziemann: Thank you. Jhalak?

Jhalak Kakkar: Yeah, I think, you know, my sort of final thoughts would be, I think we need to learn from the past, the successes of the past. Things like, you know, the multi-stakeholder model, successes we’ve seen in international cooperation. But we also need to learn from the past in terms of mistakes that have been made around governance, around technology, and not sort of repeat those. And I think we need to continue to work together to build, you know, a robust and wholesome, impactful, beneficial digital ecosystem.

Kathleen Ziemann: Thank you. Mlindi?

Mlindi Mashologu: I think from my side, I just want to say that AI needs to be anchored in human rights. We need to make sure that the technology empowers individuals. But also, when it comes to innovation, we need to do it responsibly, by looking at adaptive governance models, which include things like regulatory sandboxes. The last point that I want to touch on is the issue of collaboration: aligning national, regional, as well as global efforts to ensure that the benefits of AI are spread across everybody in our society. Those are my final thoughts.

Guilherme Canela: Thank you very much. So, now I have the very difficult task of trying to summarize this, which is impossible. But just the disclaimer: whatever I’m going to say now is the full responsibility of Guilherme Canela; it’s not on any of you, right? I think there is an interesting element in this conversation. Many years ago, when I was involved in some similar debates on AI governance and so on, the first thing that appeared was bias. And bias appeared very late in our panel, which is a good sign, because the first things that appeared here were the processes.

Even if we disagree, right? The dichotomy, the eventually false dichotomy, between innovation and risks; but all those keywords that we spoke about, risks, innovation, public goods, data governance, knowledge, bringing knowledge back: those are actually the more structured frameworks that look beyond the very specific issues of bias, or disinformation, or conspiracy theories, and so on. So I think this is a good sign for all of us, even if we disagree, as you noticed, that we are looking into something that we can take to the next level of the conversation from a governance point of view.

Because when we are too concentrated on specific pieces of content rather than on the processes, the conversation becomes very difficult, because it is tied to polarization and to specific opinions, which everyone has the right to have, on what is false and what is not false, on what is dangerous and what is not dangerous. Whereas when we concentrate on transparency, accountability, public goods, and so on, all those keywords come with lots of interesting knowledge behind them on how we transform them into concrete governance, which doesn’t mean only governmental governance; it can be self-regulation, co-regulation, and so on. But we also, for lack of time, left important things out of the conversation that need to be part of governance frameworks. For example, the issue of the energy consumption of these machines should be part of governance frameworks, and it appeared very late today, because of the time and so on.

But I do think the panel did a good job of also laying out some of the divergences of this conversation, which is part of the game. The last thing I want to say, and this is not on the shoulders of the panelists or my co-moderator: I invite you to think that being innovative is to leave no one behind in this conversation. When Eleanor Roosevelt was holding the Universal Declaration of Human Rights in that famous photo, that was the real innovation: how we came together and put those 30 articles together in a groundbreaking way, addressing issues that are not solved to this day. So what we really require is an innovation that includes everyone, and not only the 1%. Thank you very much. Thank you, my co-moderator. It was a pleasure.

Kathleen Ziemann

Speech speed

142 words per minute

Speech length

1666 words

Speech time

701 seconds

AI governance is blooming but fragmented with different levels of engagement across sectors and regions

Explanation

Ziemann describes the current AI governance landscape as having numerous emerging tools, principles, processes and bodies globally, but notes this creates a fragmented environment with varying levels of participation across different sectors and regions. She emphasizes that while there are many initiatives, there are different possibilities for engagement.

Evidence

Examples include OECD AI principles (2019), UNESCO recommendations (2022), voluntary commitments by AI companies, EU AI Act, G7 Hiroshima AI process, G20 declarations, Africa AI declaration in Kigali, and UN Global Digital Compact

Major discussion point

Current State of AI Governance Landscape

Topics

Legal and regulatory | Development

Melinda Claybaugh

Speech speed

151 words per minute

Speech length

1982 words

Speech time

783 seconds

We are at an inflection point with many frameworks but questions remain about implementation and effectiveness

Explanation

Claybaugh argues that while there’s no lack of governance frameworks and principles in the AI space, there are still many questions and concerns about their practical implementation. She suggests we need to take stock of what has been established and consider broadening the conversation beyond just risk to include opportunity and innovation.

Evidence

Meta has put out a frontier AI framework for assessing catastrophic risks, but there’s still disagreement on what risks are and how to quantify them. EU AI Act is facing implementation problems with policymakers reconsidering certain aspects

Major discussion point

Current State of AI Governance Landscape

Topics

Legal and regulatory | Economic

False dichotomy between innovation and risk management – both must go hand in hand

Explanation

Claybaugh argues that the conversation has been overweighted toward risk and safety concerns, and suggests we need to talk about enabling technology and innovation alongside risk management. She emphasizes the need to broaden the conversation to include the right voices and representation from different stakeholders.

Evidence

Different regions and countries focus more on innovation and opportunity while others focus on safety and risks. There’s lack of technical and scientific agreement about risks and measurement

Major discussion point

Innovation vs. Risk Management Balance

Topics

Economic | Legal and regulatory

Agreed with

Agreed on

False dichotomy between innovation and risk management – both must go hand in hand

Disagreed with

Disagreed on

Innovation vs. Risk Management Balance – False Dichotomy Debate

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Explanation

Claybaugh contends that there are already legal frameworks in place that pre-date ChatGPT covering issues like copyright, data use, misinformation, and safety. She suggests focusing on whether these existing frameworks are fit for purpose with new technology rather than creating new regulation specifically for the technology itself.

Evidence

Laws around harms people discuss regarding copyright, data use, misinformation and safety already exist and pre-date ChatGPT

Major discussion point

Regulatory Approaches and Implementation

Topics

Legal and regulatory | Human rights

Disagreed with

Disagreed on

Existing Legal Frameworks vs. New AI-Specific Regulation

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Explanation

Claybaugh supports the idea of objective risk assessment processes that can be viewed by external parties, similar to GDPR’s accountability structure. She sees this as a proven and durable governance mechanism that makes sense for AI, though notes challenges around defining and assessing risks.

Evidence

GDPR accountability structure and similar approaches in content regulation space in Europe with risk assessments, mitigations and transparency measures

Major discussion point

Regulatory Approaches and Implementation

Topics

Legal and regulatory | Human rights

Agreed with

Agreed on

Importance of transparency and explainability in AI systems

Different governance approaches needed for open source vs. closed AI models

Explanation

Claybaugh explains that governance challenges differ between open and closed models, particularly regarding responsibility and oversight. With open source models, companies can test and assess risks before release, but cannot predict or control how others will use the models after release.

Evidence

Meta provides open source models where they do testing and risk assessment before release, but people build on them with their own data for applications that Meta cannot see or predict

Major discussion point

Regulatory Approaches and Implementation

Topics

Legal and regulatory | Economic

AI must work for everyone requiring representative training data and fine-tuning

Explanation

Claybaugh emphasizes that for AI to be effective for all users, it’s critical to train models on data that is as representative as possible or fine-tune models appropriately. She also stresses the need for education about AI outputs so people understand the limitations and nature of AI responses.

Evidence

AI outputs are predictions about the next right word, not necessarily truth, and society is in early stages of understanding what AI should be relied on for

Major discussion point

Data Bias and Inclusivity

Topics

Development | Human rights

Convening power of IGF is crucial for bringing diverse voices together in AI governance discussions

Explanation

Claybaugh highlights the unique and important role that IGF plays in bringing different stakeholders together for AI governance conversations. She notes that Meta takes insights from these discussions back to inform their own direction and emphasizes the value of continued multi-stakeholder dialogue.

Evidence

Meta has colleagues attending IGF sessions and they take learnings back home to inform company direction

Major discussion point

Multi-stakeholder Participation and Process

Topics

Legal and regulatory | Development

Agreed with

Agreed on

Need for meaningful multi-stakeholder participation in AI governance

Mlindi Mashologu

Speech speed

171 words per minute

Speech length

2193 words

Speech time

766 seconds

Need for sector-specific policy interventions that are technically informed and locally relevant

Explanation

Mashologu argues that AI governance cannot be one-size-fits-all and requires different approaches for different sectors. He emphasizes that regulating AI in financial services would be different from regulating it in agriculture, requiring sector-specific interventions that are both technically informed and locally relevant.

Evidence

AI in financial services requires different regulation than AI in agriculture

Major discussion point

Current State of AI Governance Landscape

Topics

Legal and regulatory | Development

Agreed with

Agreed on

AI governance must be contextually relevant and locally adapted

Importance of bringing voices from the global south and underrepresented communities to governance dialogues

Explanation

Mashologu emphasizes South Africa’s G20 presidency focus on expanding participation in AI governance discussions by bringing more voices from the African continent, global south, and underrepresented communities to the center of AI governance dialogue. He argues this is essential for multilateral, multi-stakeholder, and multi-sectoral approaches.

Evidence

South Africa’s G20 presidency is working on expanding scope of participation and developing a toolkit to reduce inequalities connected to AI use

Major discussion point

Multi-stakeholder Participation and Process

Topics

Development | Legal and regulatory

Agreed with

Agreed on

Need for meaningful multi-stakeholder participation in AI governance

Should focus on adaptive governance models including regulatory sandboxes for responsible innovation

Explanation

Mashologu advocates for context-aware regulatory innovation that includes regulatory sandboxes, human-in-the-loop mechanisms, and adaptive policy tools that can be calibrated to sector-specific risks and benefits. He emphasizes the need for responsible innovation while ensuring AI empowers individuals.

Evidence

Regulatory sandboxes, human-in-the-loop mechanisms, and adaptive policy tools that can be calibrated to specific contexts

Major discussion point

Innovation vs. Risk Management Balance

Topics

Legal and regulatory | Economic

Need for sufficient explainability in AI decisions that impact human lives and livelihoods

Explanation

Mashologu argues for requirements that AI decisions, especially those impacting human lives and livelihoods, must be sufficiently explainable. He emphasizes the right to understand how AI systems make decisions in critical areas and the need for broad demographic representation in training data.

Evidence

Examples include credit scoring, predictive policing, and healthcare diagnostics where people need to understand how AI decisions are made

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

Importance of transparency and explainability in AI systems

Human-in-the-loop mechanisms essential for high-risk domains with clear intervention thresholds

Explanation

Mashologu advocates for human-in-the-loop learning in AI system development from design through deployment, where humans must guide and when needed override automated systems. This includes reinforcement learning with human feedback and clear thresholds for interventions in high-risk domains.

Evidence

Reinforcement learning with human feedback and clear thresholds for interventions in high-risk domains

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Legal and regulatory

AI governance must be anchored in human rights and ensure technology empowers individuals

Explanation

Mashologu emphasizes that AI governance must be grounded in human rights principles as enshrined in South Africa’s Constitution and Bill of Rights. He stresses that whatever technology is implemented should not infringe on people’s rights while still enabling innovation and public good applications.

Evidence

South African Constitution and Bill of Rights, Protection of Personal Information Act creates tension between using information for public good and protecting individual information

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Legal and regulatory

AI governance should be grounded in data justice principles with focus on economic equity and environmental sustainability

Explanation

Mashologu argues that South Africa’s regional approach to AI governance is grounded in data justice principles that put human rights, economic equity, and environmental sustainability at the center of AI development. He recognizes the impact of climate change on human rights and the need to address historical inequities.

Evidence

Recognition of climate change impacts on human rights and environmental sustainability, addressing historical inequities

Major discussion point

Environmental and Social Justice

Topics

Human rights | Development | Sustainable development

Regional frameworks like African Union AI strategy should align with global governance efforts

Explanation

Mashologu explains that South Africa’s AI governance approach leverages existing regional frameworks including the African Union data policy framework and emerging AI strategy, NEPAD’s science and technology frameworks, and regional policy harmonization through SADC. He sees regional integration as foundational to global governance agendas.

Evidence

African Union data policy framework, NEPAD science and technology innovation frameworks, SADC regional policy harmonization

Major discussion point

Global Cooperation and Standardization

Topics

Development | Legal and regulatory

G20 presidency focuses on developing toolkit to reduce AI-related inequalities from global south perspective

Explanation

Mashologu describes South Africa’s G20 presidency championing the development of a toolkit to reduce inequalities connected to AI use, particularly from a global south perspective. The toolkit seeks to identify structural and systematic ways AI can both amplify and redress inequality.

Evidence

G20 agenda includes developing toolkit to identify structural and systematic ways AI can amplify and redress inequality, especially from global south perspective

Major discussion point

Global Cooperation and Standardization

Topics

Development | Economic


Jhalak Kakkar

Speech speed

168 words per minute

Speech length

2889 words

Speech time

1031 seconds

Need for meaningful multi-stakeholder input in AI governance creation, not just participation as a matter of form

Explanation

Kakkar emphasizes the importance of multi-stakeholder input in creating AI governance mechanisms, but stresses that participation must be meaningful and actually impact outcomes and outputs, not just be done as a formality. She argues that different stakeholders bring different perspectives that lead to better governance outcomes.

Evidence

Different stakeholders sitting at different parts of the ecosystem bring forth different perspectives

Major discussion point

Multi-stakeholder Participation and Process

Topics

Legal and regulatory | Development

Agreed with

Agreed on

Need for meaningful multi-stakeholder participation in AI governance

False dichotomy between innovation and risk management – both must go hand in hand

Explanation

Kakkar argues against creating a false sense of dichotomy between focusing on risks versus innovation, contending that both must go hand in hand. She warns against the mistakes of not developing governance mechanisms from the beginning and emphasizes that regulation and governance are not bad words.

Evidence

Past mistakes of letting technology develop without governance mechanisms from the beginning, leading to band-aid solutions later

Major discussion point

Innovation vs. Risk Management Balance

Topics

Legal and regulatory | Economic

Agreed with

Agreed on

False dichotomy between innovation and risk management – both must go hand in hand

Disagreed with

Disagreed on

Innovation vs. Risk Management Balance – False Dichotomy Debate

Need for AI impact assessments and audits to understand societal impacts from the beginning

Explanation

Kakkar advocates for implementing AI impact assessments from a socio-technical perspective to understand impacts on society and individuals. She suggests mechanisms like sandboxes and audits can be implemented in light-touch ways to avoid creating path dependencies that require band-aid solutions later.

Evidence

Need to understand harms and impacts rather than going in circles about not knowing what risks are

Major discussion point

Regulatory Approaches and Implementation

Topics

Legal and regulatory | Human rights

Disagreed with

Disagreed on

Existing Legal Frameworks vs. New AI-Specific Regulation
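
As a hypothetical illustration of a light-touch assessment gate of the kind Kakkar describes (the dimensions, ratings, and policy below are assumptions, not an established standard), deployment can be blocked until every socio-technical dimension has been assessed and rated acceptable:

```python
# Hypothetical sketch of a light-touch, socio-technical impact assessment
# gate: deployment proceeds only once every dimension has been assessed
# and none is rated above the accepted risk level.

DIMENSIONS = ["privacy", "labor_impact", "environment", "bias_and_exclusion"]
ACCEPTED = {"low", "mitigated"}

def may_deploy(assessment: dict) -> tuple[bool, list]:
    # Unassessed dimensions and unacceptable ratings both block deployment,
    # which avoids path dependencies from shipping first and assessing later.
    blockers = [d for d in DIMENSIONS if assessment.get(d) not in ACCEPTED]
    return (not blockers, blockers)

ok, blockers = may_deploy({"privacy": "low", "labor_impact": "mitigated"})
print(ok, blockers)  # False ['environment', 'bias_and_exclusion']
```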

Multi-stakeholder model should be strengthened through IGF and other mechanisms for holistic input

Explanation

Kakkar argues that the IGF should be strengthened as part of the WSIS plus 20 review to help with agenda setting and serve as a feedback loop into CSTD, WSIS forum, and other mechanisms. She emphasizes the need for holistic multi-stakeholder input that addresses various concerns from environmental to labor issues.

Evidence

Need to address environmental concerns, impact of extraction in global majority contexts, labor for AI including labeling and worker-related concerns

Major discussion point

Multi-stakeholder Participation and Process

Topics

Development | Legal and regulatory

International coordination needed to set common baseline while respecting local contexts

Explanation

Kakkar emphasizes the need for international coordination both at multilateral and multi-stakeholder levels to establish a common baseline based on shared values like those in the Universal Declaration of Human Rights. She warns against a race to the bottom in responsible AI adherence while competing for innovation leadership.

Evidence

Existing documents ranging from UDHR to ICCPR can be interpreted through international organizations for norm building

Major discussion point

Global Cooperation and Standardization

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

AI governance must be contextually relevant and locally adapted

Transparency and explainability crucial when bias affects decision-making systems

Explanation

Kakkar acknowledges that bias exists in all human decision-making but argues that AI systems present unique challenges because explainability has been difficult to establish in many AI contexts. She emphasizes the importance of disclosure when AI systems are used and people are subject to biased decision-making that impacts them.

Evidence

Human decision-making has processes and systems with disclosure of thinking and reasoning that can be challenged, but AI systems lack this explainability

Major discussion point

Data Bias and Inclusivity

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

Importance of transparency and explainability in AI systems

Disagreed with

Disagreed on

Approach to AI Bias Management


Jovan Kurbalija

Speech speed

147 words per minute

Speech length

2050 words

Speech time

836 seconds

AI has become a commodity with 434 large language models in China alone, shifting the risk landscape

Explanation

Kurbalija argues that AI has transformed from magical technology to affordable commodity in just three years since ChatGPT’s release. He notes that AI development has become accessible, with the ability to develop AI agents in under five minutes, fundamentally shifting discussions about risks and governance from exclusive lab research to widespread accessibility.

Evidence

434 large language models in China as of the session date, ability to develop AI agent in 4 minutes 34 seconds compared to years of research previously required

Major discussion point

Current State of AI Governance Landscape

Topics

Economic | Legal and regulatory

AI is about knowledge, not just data – need to shift governance language back to knowledge

Explanation

Kurbalija argues that AI governance discussions have shifted away from knowledge to focus primarily on data, but AI is fundamentally about knowledge creation and preservation. He points out that WSIS documents originally emphasized knowledge, but this has been cleaned out of recent documents like the Global Digital Compact in favor of data-centric language.

Evidence

WSIS documents from Geneva and Tunis emphasized knowledge as key term, but 20 years later knowledge is absent from GDC and current WSIS documents which only mention data

Major discussion point

Knowledge vs. Data Framework

Topics

Legal and regulatory | Development

Disagreed with

Disagreed on

Knowledge vs. Data Framework Priority

Knowledge should have attribution and belong to communities rather than disappearing into AI systems

Explanation

Kurbalija argues that knowledge, including local community knowledge like Ubuntu and oral traditions, should be attributed and shared rather than disappearing into what he calls an ‘AI Bermuda Triangle.’ He emphasizes that knowledge belongs to someone and should be attributed even when shared through universal frameworks.

Evidence

Testing in their lab shows specific contextual knowledge being taken, repackaged, and potentially sold back. Examples include Ubuntu, oral knowledge, and written knowledge from local communities in Africa

Major discussion point

Knowledge vs. Data Framework

Topics

Human rights | Development

Disagreed with

Disagreed on

Knowledge vs. Data Framework Priority

Risk of knowledge centralization and monopolization similar to early internet development

Explanation

Kurbalija warns that there is a risk of knowledge being centralized and monopolized in AI systems, similar to the early days of the Internet, when the promise that anyone could develop digital solutions ended with capability concentrated among a few. He suggests this historical lesson should inform AI governance solutions.

Evidence

Early Internet experience where initial promise of universal access to development ended with concentration among few players

Major discussion point

Knowledge vs. Data Framework

Topics

Economic | Development

AI responsibility should follow the long-standing legal principle, dating back to Hammurabi’s code, that developers are responsible for their products and activities

Explanation

Kurbalija argues that AI governance should return to common-sense principles that have existed since ancient times, citing Hammurabi’s law about builders being responsible for house collapses. He contends that whoever develops and deploys AI systems should be responsible for their outcomes, and that AI governance should be explainable to a five-year-old.

Evidence

Hammurabi’s law from 3,400 years ago about builder responsibility, the Napoleonic code, and a hypothetical example of Jovan’s responsibility for Diplo’s AI system making false reports about session participants.

Major discussion point

Innovation vs. Risk Management Balance

Topics

Legal and regulatory | Human rights

Need for joint standardization, particularly around AI weights sharing to avoid platform lock-in

Explanation

Kurbalija suggests that to avoid fragmentation while allowing local AI adaptation, there should be standardization around AI weights sharing. He warns against repeating social media platform problems where users cannot migrate their networks between platforms, advocating for standards that allow knowledge portability between AI systems.

Evidence

Current social media platform lock-in, where users cannot migrate networks, EU Digital Services Act trying to address this issue

Major discussion point

Global Cooperation and Standardization

Topics

Legal and regulatory | Economic

Agreed with

Agreed on

AI governance must be contextually relevant and locally adapted
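
A minimal sketch of what portable, attributed weights could look like (the file layout below is invented for illustration and is not an existing standard): weights are serialized in a framework-neutral form, and attribution metadata travels with them rather than disappearing into the system.

```python
# Minimal sketch of a framework-neutral weights export with attribution
# metadata, illustrating the kind of portability a shared standard could
# enable. The file layout here is invented, not a real standard.
import json

def export_weights(weights: dict, attribution: dict, path: str) -> None:
    # Weights are stored as plain nested lists so any framework can reload
    # them; attribution metadata travels with the weights instead of being
    # stripped out when knowledge moves between systems.
    payload = {"format_version": "0.1", "attribution": attribution, "weights": weights}
    with open(path, "w") as f:
        json.dump(payload, f)

export_weights(
    weights={"layer_0": [[0.1, -0.2], [0.4, 0.3]], "bias_0": [0.0, 0.1]},
    attribution={
        "sources": ["community oral-history corpus (hypothetical)"],
        "license": "CC-BY-4.0",
    },
    path="model_weights.json",
)
```

A real standard would also need to cover tensor dtypes, sharding, and integrity checks; the point here is only that portability and attribution can be designed in from the start.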

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Explanation

Kurbalija argues that while illegal biases that insult human dignity must be addressed with urgency, there has been a dangerous obsession with cleaning all biases from AI systems. He contends that humans are naturally ‘biased machines’ influenced by culture, age, and many other identity aspects, and that cleaning all biases from AI systems would be, at best, impossible and, at worst, dangerous.

Evidence

Personal example that each of us is influenced by our culture, age, and many other specificities

Major discussion point

Data Bias and Inclusivity

Topics

Human rights | Legal and regulatory

Disagreed with

Disagreed on

Approach to AI Bias Management


Guilherme Canela

Speech speed

145 words per minute

Speech length

1156 words

Speech time

477 seconds

Innovation should mean leaving no one behind in the conversation

Explanation

Canela argues that true innovation in AI governance should be inclusive and ensure that everyone is part of the conversation, not just the 1%. He draws a parallel to Eleanor Roosevelt and the Universal Declaration of Human Rights as an example of groundbreaking innovation that brought people together in an inclusive way.

Evidence

Eleanor Roosevelt holding the Universal Declaration of Human Rights as example of real innovation that was groundbreaking and inclusive with its 30 articles

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Development


Audience

Speech speed

140 words per minute

Speech length

1152 words

Speech time

493 seconds

Data gathering inherently contains bias from experts collecting it, need inclusive approaches

Explanation

An audience member from Nigeria argues that data gathering for AI is inherently flawed because it’s done by experts who each have their own biases, making the resulting AI systems biased from the start. They emphasize the need for inclusive approaches that bring all stakeholders into the data gathering process to ensure AI serves all communities.

Evidence

Every individual person has their own bias, so whatever data is gathered is as inherently flawed as the bias of the person gathering it

Major discussion point

Data Bias and Inclusivity

Topics

Development | Human rights

Social offset mechanisms could help organizations demonstrate responsibility for AI risks

Explanation

An audience member suggests that organizations using AI could document existing and foreseeable risks and demonstrate how they offset those risks in an objective way with independent oversight. They propose this as a creative way to make AI governance tangible and explainable, drawing parallels to carbon offset mechanisms.

Evidence

B Corp standard for environmental, social, and governance (ESG) performance, carbon offset mechanisms, hypothetical example of social media platforms demonstrating social offset 15 years ago

Major discussion point

Environmental and Social Justice

Topics

Legal and regulatory | Human rights
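
As a hypothetical sketch of how such a register might be structured (field names and the example entry are invented; this is not an existing standard like B Corp), each documented risk is paired with its offset measure and the independent body, if any, that verified the claim:

```python
# Minimal sketch of the suggested "social offset" register, inspired by
# carbon-offset reporting: each documented risk is logged next to the
# measure meant to offset it and the independent body that verified the
# claim. All field names and the example are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OffsetEntry:
    risk: str                          # existing or foreseeable risk
    offset_measure: str                # how the organization offsets it
    verified_by: Optional[str] = None  # independent oversight body

@dataclass
class SocialOffsetRegister:
    organization: str
    entries: list = field(default_factory=list)

    def unverified(self) -> list:
        # Entries with no independent verification are the ones an
        # oversight regime would flag for audit.
        return [e for e in self.entries if e.verified_by is None]

register = SocialOffsetRegister("ExampleCo")
register.entries.append(OffsetEntry(
    risk="recommender system amplifies polarizing content",
    offset_measure="funds independent media-literacy programs",
))
print(len(register.unverified()))  # -> 1, flagged for independent review
```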

Need to address environmental impacts and extractivism related to AI infrastructure development

Explanation

An audience member from Mexico challenges the underestimation of AI risks and specifically questions how companies like Meta plan to self-regulate when they haven’t conducted environmental or human rights assessments for hyperscale data centers. They point to examples of data centers being moved from places like the Netherlands to Global South countries without proper consultation.

Evidence

Hyperscale data centers in the Netherlands facing public pressure, then moved to Global South countries or Spain, issues with extractivism, water crises, and pollution affecting communities without consultation

Major discussion point

Environmental and Social Justice

Topics

Development | Human rights | Sustainable development


Online moderator

Speech speed

145 words per minute

Speech length

178 words

Speech time

73 seconds

Questions about the Council of Europe framework convention on AI as the first international legally binding treaty

Explanation

The online moderator relayed a question from Grace Thompson about panelists’ views on the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law, the first legally binding international treaty designed to safeguard people in the development and oversight of AI systems.

Evidence

42 signatories to date including non-European states, advocacy from Center for AI and Digital Policy for endorsement

Major discussion point

Global Cooperation and Standardization

Topics

Legal and regulatory | Human rights

Finance and military sectors are biggest AI spenders but lack transparency about successes and failures

Explanation

The online moderator conveyed Michael Nelson’s observation that finance and military sectors spend the most money on AI development but there is very little public knowledge about their successes and failures. This raises concerns about transparency and accountability in these critical sectors.

Evidence

Finance sector and military are the two sectors spending the most money on AI

Major discussion point

Current State of AI Governance Landscape

Topics

Economic | Legal and regulatory

Agreements

Agreement points

False dichotomy between innovation and risk management – both must go hand in hand

Both speakers agree that creating a division between focusing on innovation versus managing risks is counterproductive. They argue that both aspects must be addressed simultaneously rather than treating them as opposing priorities.

Legal and regulatory | Economic

Need for meaningful multi-stakeholder participation in AI governance

Convening power of IGF is crucial for bringing diverse voices together in AI governance discussions

Need for meaningful multi-stakeholder input in AI governance creation, not just participation as a matter of form

Importance of bringing voices from the global south and underrepresented communities to governance dialogues

All three speakers emphasize the critical importance of inclusive, meaningful participation from diverse stakeholders in AI governance processes, with particular attention to ensuring voices from the Global South and underrepresented communities are heard.

Legal and regulatory | Development

Importance of transparency and explainability in AI systems

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Transparency and explainability crucial when bias affects decision-making systems

Need for sufficient explainability in AI decisions that impact human lives and livelihoods

Speakers agree that AI systems, particularly those affecting human lives and decision-making, must be transparent and explainable, with objective and auditable processes for risk assessment and accountability.

Legal and regulatory | Human rights

AI governance must be contextually relevant and locally adapted

Need for sector-specific policy interventions that are technically informed and locally relevant

International coordination needed to set common baseline while respecting local contexts

Need for joint standardization, particularly around AI weights sharing to avoid platform lock-in

Speakers agree that while some standardization and coordination is needed, AI governance must be adapted to local contexts, sectors, and specific needs rather than applying one-size-fits-all solutions.

Legal and regulatory | Development

Similar viewpoints

Both speakers advocate for proactive governance mechanisms that can assess and address AI impacts from the early stages of development, using adaptive approaches like regulatory sandboxes to enable responsible innovation while managing risks.

Need for AI impact assessments and audits to understand societal impacts from the beginning

Should focus on adaptive governance models including regulatory sandboxes for responsible innovation

Legal and regulatory | Economic

Both speakers draw lessons from internet governance history, warning against concentration of power and emphasizing the need for distributed, multi-stakeholder approaches to prevent repeating past mistakes of centralization.

Risk of knowledge centralization and monopolization similar to early internet development

Multi-stakeholder model should be strengthened through IGF and other mechanisms for holistic input

Economic | Development

Both speakers emphasize that human rights principles should be the foundation of AI governance, with international coordination needed to establish common baselines while allowing for local adaptation and ensuring technology serves to empower rather than harm individuals.

AI governance must be anchored in human rights and ensure technology empowers individuals

International coordination needed to set common baseline while respecting local contexts

Human rights | Legal and regulatory

Unexpected consensus

Existing legal frameworks may be sufficient with adaptation rather than new AI-specific regulation

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Transparency and explainability crucial when bias affects decision-making systems

Despite representing different sectors (private sector vs. civil society), both speakers acknowledge that many existing legal frameworks may be applicable to AI governance challenges, though they may need adaptation. This consensus is unexpected given typical tensions between industry and civil society on regulatory approaches.

Legal and regulatory | Human rights

Acknowledgment of natural human bias while focusing on harmful biases

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Transparency and explainability crucial when bias affects decision-making systems

Both speakers, despite different backgrounds, agree that not all bias is problematic and that efforts should focus on addressing harmful or illegal biases rather than attempting to eliminate all bias. This nuanced view is unexpected in AI governance discussions that often call for complete bias elimination.

Human rights | Legal and regulatory

Common sense and historical legal principles should guide AI governance

Common sense principles from historical legal frameworks like Hammurabi’s code should guide AI responsibility

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Unexpectedly, both the academic/diplomatic representative and the private sector representative agree that AI governance should build on established legal principles and common sense approaches rather than creating entirely new frameworks. This suggests convergence on evolutionary rather than revolutionary regulatory approaches.

Legal and regulatory | Human rights

Overall assessment

Summary

The speakers demonstrated significant consensus on key principles including the need for multi-stakeholder participation, transparency and explainability, contextual adaptation of governance frameworks, and the integration of innovation with risk management. There was also agreement on building upon existing legal frameworks rather than creating entirely new regulatory structures.

Consensus level

High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggests a mature discussion where stakeholders from different sectors (private, public, civil society, and academic) have moved beyond basic positions to focus on practical governance solutions. The implications are positive for AI governance development, as this level of agreement on core principles provides a strong foundation for collaborative policy development while allowing for contextual adaptation and sector-specific approaches.

Differences

Different viewpoints

Innovation vs. Risk Management Balance – False Dichotomy Debate

False dichotomy between innovation and risk management – both must go hand in hand

While both speakers agree it’s a false dichotomy, Claybaugh argues the conversation has been overweighted toward risk and safety concerns and suggests broadening to include opportunity and innovation. Kakkar counters that regulation and governance are not bad words and warns against repeating past mistakes of not developing governance mechanisms from the beginning.

Legal and regulatory | Economic

Existing Legal Frameworks vs. New AI-Specific Regulation

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Need for AI impact assessments and audits to understand societal impacts from the beginning

Claybaugh advocates for using existing legal frameworks that pre-date ChatGPT and assessing their fitness for purpose rather than creating new AI-specific regulation. Kakkar argues for implementing new AI impact assessments and audits from the beginning to understand societal impacts that may not be covered by existing frameworks.

Legal and regulatory | Human rights

Approach to AI Bias Management

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Transparency and explainability are crucial when bias affects decision-making systems

Kurbalija argues against the ‘obsession’ with cleaning all biases from AI systems, distinguishing between illegal biases that should be addressed and natural human biases that are inevitable. Kakkar emphasizes the unique challenges AI systems present regarding explainability and the need for transparency when biased decision-making impacts people.

Human rights | Legal and regulatory

Knowledge vs. Data Framework Priority

AI is about knowledge, not just data – need to shift governance language back to knowledge

Knowledge should have attribution and belong to communities rather than disappearing into AI systems

Kurbalija uniquely emphasizes shifting AI governance discussions from data-centric to knowledge-centric language, arguing that knowledge should have attribution and belong to communities. Other panelists focus more on data governance, bias, and regulatory frameworks without specifically addressing this knowledge vs. data distinction.

Legal and regulatory | Development

Unexpected differences

Self-regulation vs. External Oversight Effectiveness

Risk assessment processes should be objective, transparent, and auditable, similar to GDPR accountability structures

Need for AI impact assessments and audits to understand societal impacts from the beginning

Need to address environmental impacts and extractivism related to AI infrastructure development

Unexpected tension emerged when Claybaugh discussed Meta’s self-regulatory efforts while the audience member from Mexico directly challenged Meta’s track record on environmental and human rights assessments for data centers. This created an unexpected confrontation about corporate accountability that Claybaugh couldn’t fully address.

Development | Human rights | Sustainable development

Urgency of AI Governance Implementation

We are at an inflection point with many frameworks but questions remain about implementation and effectiveness

False dichotomy between innovation and risk management – both must go hand in hand

While both speakers acknowledged the current state of AI governance, an unexpected disagreement emerged about timing and urgency. Claybaugh suggested taking stock and potentially slowing down (referencing the EU’s reconsideration), while Kakkar emphasized the urgency of implementing governance mechanisms immediately to avoid path dependencies.

Legal and regulatory | Economic

Overall assessment

Summary

The main areas of disagreement centered around the balance between innovation and regulation, the adequacy of existing legal frameworks versus need for new AI-specific governance, approaches to bias management, and the priority of knowledge versus data frameworks in AI governance discussions.

Disagreement level

Moderate disagreement with significant implications. While speakers shared common goals of inclusive, responsible AI governance, their different approaches could lead to fragmented implementation strategies. The disagreements reflect broader tensions in the AI governance community between industry self-regulation and external oversight, between leveraging existing frameworks and creating new ones, and between global standardization and local adaptation. These disagreements are constructive and represent legitimate different perspectives rather than fundamental conflicts, but they highlight the complexity of achieving coordinated AI governance across different stakeholders and regions.


Takeaways

Key takeaways

AI governance is at a critical inflection point with numerous frameworks established but implementation challenges remaining

Multi-stakeholder participation must be meaningful and inclusive, particularly bringing voices from the global south and underrepresented communities

The innovation vs. risk management debate represents a false dichotomy – both elements must be addressed simultaneously through adaptive governance models

AI governance should shift focus from data to knowledge, with proper attribution and community ownership of knowledge being essential

Human rights must anchor all AI governance efforts, with explainability and human-in-the-loop mechanisms required for high-risk applications

Existing legal frameworks can address many AI-related harms but need assessment for fitness-for-purpose in the AI context

Global cooperation requires standardization (particularly around AI weights sharing) while respecting local contexts and priorities

Bias in AI systems is inevitable but must be distinguished between natural human bias and illegal/harmful bias, with transparency being key

Environmental and social justice considerations, including extractivism and energy consumption, must be integrated into AI governance frameworks

The IGF’s convening power is crucial for bringing diverse stakeholders together to advance AI governance discussions

Resolutions and action items

South Africa’s G20 presidency will develop a toolkit to reduce AI-related inequalities from a global south perspective

Continue strengthening the IGF as a feedback mechanism into CSTD, WSIS forum, and other multilateral processes

Implement AI impact assessments and audits to understand societal impacts from early stages of development

Develop regulatory sandboxes and adaptive policy tools for context-specific AI governance

Focus on joint standardization efforts, particularly around AI weights sharing standards

Align national AI policies with regional frameworks like the African Union AI strategy

Establish human-in-the-loop mechanisms with clear intervention thresholds for high-risk AI domains

Unresolved issues

How to achieve global consensus on what constitutes AI risks and how to measure them scientifically

How to balance innovation incentives with regulatory requirements without creating a race to the bottom

How to ensure meaningful participation from global majority countries in AI governance processes given resource constraints

How to address the environmental impact and energy consumption of AI systems in governance frameworks

How to handle responsibility and liability in open source AI models where usage cannot be predicted or controlled

How to prevent knowledge centralization and monopolization while enabling AI development

How to create universal frameworks while respecting local contexts and priorities

How to address the concentration of AI development in certain regions and democratize access to AI technology

How to implement effective transparency and explainability requirements for complex AI systems

Suggested compromises

Use risk assessment frameworks similar to GDPR that are objective, transparent, and auditable rather than prescriptive technology regulation

Implement light-touch regulatory mechanisms like sandboxes and impact assessments to understand harms without stifling innovation

Focus on sector-specific governance approaches rather than one-size-fits-all AI regulation

Distinguish between different types of AI systems (open source vs. closed, high-risk vs. low-risk) for differentiated governance approaches

Build on existing legal frameworks and assess their fitness for purpose rather than creating entirely new regulatory structures

Establish common baseline standards through international cooperation while allowing for local adaptation and priorities

Balance self-regulation by companies with external oversight and multi-stakeholder input

Address both macro-level foresight and micro-level precision in AI governance through complementary approaches

Thought provoking comments

We don’t necessarily agree on what the risks are and whether there are risks and how we quantify them… Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with?

Speaker

Melinda Claybaugh

Reason

This comment challenged the dominant risk-focused narrative in AI governance discussions and introduced the concept of a false dichotomy between innovation and safety. It was provocative because it suggested the AI governance community might be overemphasizing risks at the expense of opportunities.

Impact

This comment became a central theme throughout the discussion, with Kakkar directly addressing it as a ‘false dichotomy’ and arguing that innovation and governance must go hand in hand. It shifted the conversation from purely technical governance issues to fundamental questions about how we frame AI development.

We have to change our governance language. If you read WSIS documents, both Tunis and Geneva, the key term was knowledge, not data… Now, somehow, in 20 years’ time, knowledge is completely cleaned. You don’t have it in GDC, you don’t have it in the WSIS documents, you have only data. And AI is about knowledge.

Speaker

Jovan Kurbalija

Reason

This observation was intellectually provocative because it identified a fundamental shift in how we conceptualize information governance – from knowledge (which implies human understanding and context) to data (which is more technical and abstract). It connected current AI debates to broader historical patterns in digital governance.

Impact

This comment introduced a new analytical framework that influenced subsequent discussions about attribution, ownership, and the democratization of AI. It led to deeper conversations about who owns knowledge embedded in AI systems and how to preserve local and contextual knowledge.

Very often, I’m hearing conversations about, you know, we’ve talked about risk. Let’s focus on innovation now. I think it’s creating a false sense of dichotomy. I think they have to go hand in hand… We need to be carrying out AI impact assessments from a socio-technical perspective so that we really understand impacts on society and individuals.

Speaker

Jhalak Kakkar

Reason

This comment directly challenged Melinda’s framing and provided a sophisticated counter-argument that governance and innovation are complementary rather than competing priorities. It introduced the concept of socio-technical impact assessments as a practical solution.

Impact

This response elevated the discussion from a simple either/or debate to a more nuanced conversation about how to implement governance mechanisms that support rather than hinder innovation. It led to practical discussions about sandboxes, audits, and light-touch regulatory approaches.

We advocate for context-aware regulatory innovation… There is no one-size-fits-all when it comes to AI. We need peripheral foundational approaches that are grounded in equity and don’t want AI to replace humans, but we want AI to work with humans.

Speaker

Mlindi Mashologu

Reason

This comment introduced the crucial concept of ‘context-aware regulatory innovation’ and emphasized the Global South perspective on AI governance. It challenged universalist approaches to AI governance while maintaining focus on equity and human-centered development.

Impact

This perspective influenced the entire panel’s discussion about local relevance versus global coordination, leading to deeper conversations about how to avoid AI governance fragmentation while respecting local contexts and priorities.

We should keep in mind that we are bias machines. I am biased. My culture, my age, my hormones, whatever, are defining what I’m saying now… This obsession with cleaning bias was very dangerous. Yes, illegal biases, biases that threaten communities, definitely. But I would say we have to bring more common sense into this.

Speaker

Jovan Kurbalija

Reason

This was a controversial and thought-provoking comment that challenged the prevailing orthodoxy about bias elimination in AI systems. It introduced nuance by distinguishing between harmful biases and natural human perspectives, advocating for a more realistic approach to bias in AI.

Impact

This comment sparked immediate reactions from other panelists and audience members, leading to a more sophisticated discussion about what types of bias are problematic versus natural, and how to handle bias in AI systems without losing valuable diversity of perspectives.

How do we really, truly democratize access to AI? We need to enhance capacity of countries to create local AI ecosystems so that we don’t have a concentration of infrastructure and technology in certain regions… How do we facilitate access to technology and create AI commons?

Speaker

Jhalak Kakkar

Reason

This comment shifted the focus from governance frameworks to fundamental questions of global equity and access. It connected AI governance to broader development and justice issues, introducing concepts like ‘AI commons’ and technology transfer.

Impact

This perspective influenced the discussion toward more structural questions about global AI inequality and led to conversations about how governance frameworks should address not just safety and innovation, but also equitable access and development.

Overall assessment

These key comments fundamentally shaped the discussion by introducing several important tensions and frameworks: the innovation-versus-governance debate, the knowledge-versus-data paradigm shift, the global-versus-local governance challenge, and the bias-elimination-versus-natural-diversity question. Rather than settling these tensions, the comments elevated the conversation to a more sophisticated level where participants grappled with complex trade-offs and nuanced positions. The discussion evolved from initial position statements to a more dynamic exchange where panelists directly engaged with each other’s frameworks, ultimately producing a richer understanding of AI governance challenges that goes beyond simple regulatory approaches to encompass questions of equity, access, knowledge ownership, and cultural context.

Follow-up questions

How can existing legal frameworks be adapted to be fit for purpose with AI technology, particularly in areas like antitrust/competition law, copyright law, and data protection?

Speaker

Jhalak Kakkar

Explanation

This addresses the gap between current regulations and the new realities that AI brings, such as network effects, data advantages, and fair use exceptions being leveraged by large companies in ways not originally intended.

How can we develop mechanisms to share AI model weights and preserve knowledge attribution while enabling interoperability between AI systems?

Speaker

Jovan Kurbalija

Explanation

This is crucial for preventing knowledge monopolization and ensuring that knowledge generated by communities can be preserved and attributed properly, while avoiding the platform lock-in problems seen with social media.

How can we implement AI impact assessments and auditing mechanisms in light-touch ways that don’t burden innovation but help us understand societal impacts?

Speaker

Jhalak Kakkar

Explanation

This addresses the need to understand AI’s impacts on society and individuals before path-dependencies are created, allowing for proactive rather than reactive governance.

How can we ensure meaningful participation from the global majority in AI governance processes, not just token representation?

Speaker

Jhalak Kakkar

Explanation

This is essential because AI will function and impact differently in different contexts, requiring diverse perspectives in governance frameworks rather than one-size-fits-all approaches.

How can we develop context-aware regulatory frameworks that address sector-specific AI applications while maintaining coherent governance principles?

Speaker

Mlindi Mashologu

Explanation

Different sectors (financial services, agriculture, healthcare) require different regulatory approaches, but there’s a need to understand how to balance specificity with consistency.

How can we establish clear responsibility and liability frameworks for AI systems, particularly in cases where AI makes errors or causes harm?

Speaker

Jovan Kurbalija

Explanation

Using the example of Diplo’s AI potentially misreporting statements, this highlights the need for clear accountability mechanisms similar to historical legal principles like those in Hammurabi’s code.

How can we create universal frameworks for AI governance while respecting local contexts and avoiding fragmentation?

Speaker

Kunle Olorundare

Explanation

This addresses the tension between having consistent global standards and accommodating different regional needs and priorities in AI governance.

How can we ensure inclusive data collection processes that account for multiple stakeholder perspectives and reduce inherent biases in AI training data?

Speaker

Kunle Olorundare

Explanation

This is fundamental to creating AI systems that work for everyone, as biased data collection by experts can perpetuate existing inequalities and exclusions.

How can we address the environmental and social justice impacts of AI infrastructure, particularly regarding data center placement and resource extraction?

Speaker

Anna from R3D

Explanation

This highlights the need to consider the broader impacts of AI development, including environmental costs and how they disproportionately affect communities in the Global South.

What are the implications of AI development and deployment in high-stakes sectors like finance and military, and how should these be governed?

Speaker

Michael Nelson (online)

Explanation

These sectors are investing heavily in AI but with little transparency about successes and failures, raising questions about oversight and accountability in critical applications.

How can we establish international coordination mechanisms that set common baselines for AI governance while allowing for innovation?

Speaker

Jhalak Kakkar

Explanation

This addresses the need to prevent a ‘race to the bottom’ in AI governance standards while maintaining space for technological advancement and regional adaptation.

How can we democratize access to AI technology and create AI commons to prevent concentration of AI capabilities in certain regions?

Speaker

Jhalak Kakkar

Explanation

This relates to ensuring equitable access to AI benefits and preventing the same concentration patterns seen in previous technology developments.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #462 Bridging the Compute Divide a Global Alliance for AI

WS #462 Bridging the Compute Divide a Global Alliance for AI

Session at a glance

Summary

This panel discussion focused on “Bridging the Compute Divide” and exploring the need for a global alliance to ensure equitable access to AI computational resources. The conversation was moderated by Fabro Steibel from Brazil’s ITS Rio Institute, who proposed creating a “GAVI for AI” – modeled after the successful Global Alliance for Vaccines and Immunization that has facilitated vaccine access worldwide since 2000.


The panelists included Jason Slater from UNIDO, Elena Estavillo Flores from Centro AI para la Sociedad del Futuro, Ivy Lau-Schindewolf from OpenAI, and Alisson O’Beirne from the Canadian government. They identified several key barriers to equitable compute access, including the concentration of computational power in roughly 30 nations (primarily the US and China), infrastructure gaps creating “compute deserts,” skills shortages, and the compounding nature of digital divides that leave regions further behind over time.


The discussion revealed that the challenge extends beyond inequitable distribution to an overall supply-demand gap affecting even developed nations. Panelists emphasized that solutions require multi-stakeholder collaboration involving governments, private sector, academia, and civil society. They highlighted successful examples like UNIDO’s AI for Manufacturing alliance and OpenAI’s Stargate infrastructure project, which demonstrate how technical, financial, and political partners can work together effectively.


Key lessons from GAVI included the importance of inclusive governance models, corrective mechanisms for historical inequalities, and sustainable financing structures. The panelists stressed that addressing compute access must be coupled with investments in local talent, skills development, and ensuring AI tools are designed responsively for diverse communities. The discussion concluded with calls for multiple collaborative alliances tailored to different communities and contexts, emphasizing that collective action requires genuine listening, compromise, and openness to different perspectives and needs.


Keypoints

## Major Discussion Points:


– **Global Compute Divide and Access Inequality**: The panel highlighted the stark disparity in computational power access between the Global North and South, with Brazil having only 1% of global data centers and 0.2% of computational power. This creates barriers for AI development and perpetuates existing digital divides.


– **Need for Multi-stakeholder Collaboration**: Drawing lessons from GAVI (Global Alliance for Vaccines and Immunization), panelists emphasized the necessity of bringing together governments, private sector, academia, and civil society to address compute access challenges through coordinated purchasing power and resource sharing.


– **Infrastructure vs. Benefits Access**: The discussion explored whether countries need local compute infrastructure or if remote access to AI benefits could suffice. Examples included partnerships between countries for supercomputing resources and making AI tools accessible through platforms like WhatsApp in low-connectivity areas.


– **Sustainable Local Capacity Building**: Panelists stressed the importance of investing in people, local talent, and startup ecosystems rather than just hardware, highlighting how Global South innovation often emerges from creative solutions with limited resources.


– **Governance and Equitable Distribution**: The conversation addressed how to ensure fair distribution of compute resources through inclusive governance models that consider not just technical efficiency but social fairness, involving local institutions in decision-making processes.


## Overall Purpose:


The discussion aimed to explore the feasibility and framework for creating global alliances to address the computational power divide in AI development, specifically focusing on how to ensure equitable access for Global South nations. The panel sought to identify barriers, share lessons from successful global initiatives like GAVI, and propose collaborative solutions for bridging the compute gap.


## Overall Tone:


The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s ideas rather than debating opposing viewpoints. The tone was solution-oriented and pragmatic, moving from problem identification to concrete examples and actionable proposals. There was a sense of urgency about addressing inequities while remaining optimistic about the potential for international cooperation. The conversation became increasingly focused on practical implementation strategies as it progressed, with panelists sharing specific initiatives and calling for concrete collaborative action.


Speakers

– **Fabro Steibel** – Executive Director of ITS Rio, the Institute for Technology and Society (civil society organization), Panel Moderator


– **Jason Slater** – Chief AI Digital Innovation Officer at UNIDO (United Nations Industrial Development Organization)


– **Elena Estavillo Flores** – Founder and leader of Centro AI para la Sociedad del Futuro (think tank), Former telecommunications regulator in Mexico, Economist


– **Ivy Lau-Schindewolf** – International policy and partnerships at OpenAI, Global affairs team member coordinating work in growth markets (Africa, Latin America, APEC) and multilateral engagements


– **Alisson O’Beirne** – Director of International Telecommunications and Internet Policy, Department of Innovation, Science and Economic Development, Government of Canada


Additional speakers:


None identified beyond the provided list of speakers.


Full session report

# Bridging the Compute Divide: A Comprehensive Panel Discussion Report


## Executive Summary


This panel discussion, moderated by Fabro Steibel from Brazil’s ITS Rio Institute, explored the critical challenge of creating equitable access to artificial intelligence computational resources through global collaboration. The conversation centered on the concept of establishing a “GAVI for AI” – a global alliance modeled after the successful Global Alliance for Vaccines and Immunization.


The distinguished panel brought together perspectives from multilateral organizations, government, private sector, and civil society. Jason Slater, Chief AI Digital Innovation Officer at UNIDO, represented the industrial development perspective; Elena Estavillo Flores provided insights from her experience as both a former telecommunications regulator in Mexico and current think tank leader; Ivy Lau-Schindewolf offered the private sector perspective from OpenAI’s global affairs team; and Alisson O’Beirne contributed the Canadian government’s policy viewpoint on international telecommunications and internet governance.


The discussion revealed that the compute divide represents both a supply shortage affecting all nations and an access inequality that disproportionately impacts the Global South. Panelists identified multiple interconnected barriers while emphasizing that solutions require unprecedented multi-stakeholder collaboration involving governments, private companies, academic institutions, and civil society organizations.


## The Global Compute Divide: Scope and Scale


### Quantifying the Inequality


Fabro Steibel opened the discussion with stark statistics illustrating the magnitude of the global compute divide. According to “EAA numbers released this week,” Brazil possesses only 1% of the world’s data centers (representing half of all Latin America) and a mere 0.2% of worldwide computational power. This disparity demonstrates how even major economies in the Global South face significant barriers to AI development and deployment.


Jason Slater expanded on this theme, noting that nearly 3 billion people remain unconnected globally, with Africa showing only a 25% adoption rate for AI and digital tools. He identified the concentration of computational power in approximately 30 nations, primarily the United States and China, creating what he termed “compute deserts” – regions with minimal connectivity and substantial skills gaps.


### The Compounding Nature of Digital Divides


Elena Estavillo Flores provided crucial insight into how the compute divide creates self-reinforcing cycles of disadvantage. She explained that the barriers are “not only complex, but they’re also compounding, they’re self-perpetuating.” This creates a situation where regions already lacking computational resources fall further behind as global demand increases and investment flows to areas that already possess compute capacity.


Alisson O’Beirne reinforced this analysis, noting that “as folks are left behind and as there’s a lack in compute capacity, those that are already behind the game are going to be left further and further behind.” This temporal dimension suggests that without proactive intervention, market forces alone will continue to exacerbate existing disparities.


### Universal Supply Constraints


Ivy Lau-Schindewolf introduced a crucial reframing by highlighting that “the problem isn’t just inequitable access. The problem is everyone needs more.” She noted that ChatGPT reached 100 million users in one month and now has 500 million weekly active users, illustrating the explosive global demand for AI capabilities that exceeds current supply capacity.


This universal supply constraint means that solutions cannot rely solely on redistributing existing computational resources from developed to developing nations. Instead, addressing the compute divide requires both expanding overall global capacity and ensuring more equitable distribution of resources.


## Barriers to Equitable Access


### Infrastructure and Investment Challenges


Elena Estavillo Flores highlighted particular challenges facing Latin America, where “there’s not enough private investment” and “governments don’t have enough resources for the investments that are needed.” This creates a funding gap where neither private markets nor public resources alone can provide the massive capital investments required for modern AI infrastructure.


Geographic cost differences and market concentration create additional barriers. The concentration of computational resources in specific regions leads to cost advantages that become self-perpetuating, making it increasingly difficult for emerging economies to compete or develop local alternatives.


### Skills and Capacity Gaps


Beyond physical infrastructure, Jason Slater emphasized that compute deserts are characterized not only by lack of connectivity but also by “significant skills gaps.” These capacity constraints mean that even when computational resources become available, many regions lack the technical expertise necessary to utilize them effectively.


Ivy Lau-Schindewolf reinforced this point, noting that “cultivating vibrant startup ecosystems and investing in people through education programmes are essential beyond just hardware infrastructure.”


## Learning from GAVI and Existing Models


### Multi-Stakeholder Collaboration


Jason Slater highlighted how successful global alliances require bringing together diverse stakeholders as trusted conveners. He pointed to UNIDO’s AI for Manufacturing Global Alliance, which includes 140 members from over 40 countries, as an example of effective multi-stakeholder collaboration.


Alisson O’Beirne emphasized how “critical mass through collective action gives countries greater negotiating power than individual efforts.” This insight suggests that coordinated demand from multiple countries can influence market dynamics and pricing in ways that individual national efforts cannot achieve.


### Concrete Examples of Collaboration


Jason Slater presented UNIDO’s Ethiopia coffee project as a practical example of multi-stakeholder collaboration. The project brings together Italy, Ethiopia, Google, NGS, and the International Coffee Organization to address the EU deforestation directive while building local AI capacity.


Ivy Lau-Schindewolf highlighted OpenAI’s Stargate infrastructure project as an example of private sector leadership mobilizing diverse partners for large-scale compute infrastructure development. She also mentioned OpenAI’s Academy training, which has reached 1.4 million people globally, and partnerships with platforms like WhatsApp to provide AI access in low-connectivity environments.


Alisson O’Beirne announced Canada’s collaboration with the UK’s Foreign, Commonwealth and Development Office through a $10 million commitment from Canada’s IDRC to develop an “equal compute network.”


## Alternative Approaches and Innovation


### Remote Access vs. Local Infrastructure


The discussion explored whether countries require local computational infrastructure or whether remote access could suffice for many applications. Ivy Lau-Schindewolf argued that creative solutions, such as integrating AI capabilities through widely-used platforms, could provide immediate benefits while longer-term infrastructure development proceeds.


However, other panelists emphasized the importance of local infrastructure for enabling indigenous innovation and ensuring technological sovereignty. Elena Estavillo Flores noted that Global South regions have “managed to develop ingenuity and contextual intelligence to find solutions with very limited resources,” suggesting that “if this ingenuity is met with more infrastructure, then there is an opportunity.”


### Building on Local Innovation


Fabro Steibel mentioned that Brazil’s recently released AI national plan includes examples like “Favela GPT and Amazon GPT,” demonstrating local innovation that could be amplified through better infrastructure access.


Alisson O’Beirne expanded the discussion to include “equitability of design and equitability of use,” noting that “if we don’t have AI tools that are designed responsibly and that respond to the needs of local communities, access is not going to be sufficient.”


## Governance and Implementation Challenges


### Balancing Efficiency and Equity


Elena Estavillo Flores emphasized the need for “inclusive governance models with meaningful civil society participation” to ensure that fairness considerations are not subordinated to pure technical optimization. She highlighted that “credible governance models require trust-building mechanisms and fair benefit-sharing to maintain long-term participation and investment.”


### The Importance of Listening and Compromise


Alisson O’Beirne provided perhaps the most crucial insight for implementation, emphasizing that successful collaboration requires “a spirit of listening and openness and a spirit of compromise.” This recognition that technical and financial solutions alone are insufficient without genuine commitment to understanding diverse perspectives provides a foundation for moving from discussion to implementation.


## Concrete Next Steps and Commitments


### Immediate Opportunities


Jason Slater issued a direct call for participants to join UNIDO’s existing AI for Manufacturing Global Alliance, providing an immediate platform for multi-stakeholder collaboration. He also mentioned that the Global Digital Compact provides “a clear framework for action through multi-stakeholder approaches linking digital economy and AI objectives.”


### Committed Resources


Several concrete commitments emerged from the discussion:


– Canada’s IDRC and the UK’s Foreign, Commonwealth and Development Office have committed $10 million to develop an “equal compute network”


– OpenAI committed to expanding their “OpenAI for countries” programme and Academy training


– UNIDO committed to continuing development of AI lighthouse solutions beyond the Ethiopia coffee project


### Scaling Successful Models


Participants agreed to explore replicating successful consortium models for international compute infrastructure projects, indicating willingness to adapt proven approaches for broader international cooperation.


## Conclusion


The panel discussion revealed both the complexity of addressing global compute access challenges and the potential for meaningful international cooperation. The strong consensus on the need for multi-stakeholder collaboration, combined with concrete examples of successful initiatives and committed resources, suggests viable pathways for implementing global alliance models.


The most significant insight may be the recognition that successful collaboration requires not just technical and financial solutions, but genuine commitment to listening, compromise, and understanding diverse perspectives and needs. The path forward likely requires multiple collaborative approaches tailored to different communities and contexts, coordinated through existing international frameworks, and supported by a combination of public and private resources.


The urgency of action is clear, given the self-perpetuating nature of compute disadvantages and the rapid pace of AI development. However, the discussion suggests that the foundations for effective global cooperation exist, requiring primarily the political will and institutional commitment necessary to translate shared understanding into coordinated action.


Session transcript

Fabro Steibel: So, hello everyone, welcome to the panel. If you are online, welcome. If you are here in front of us, welcome. This is the panel Bridging the Compute Divide: A Global Alliance for AI. And if you ask whether it is the global alliance or a global alliance, I think it might be a global alliance, and even many global alliances, as long as we have this idea of a global alliance. We’ll introduce the panel shortly, then I’ll pass the word for three rounds of questions, and then we open for your comments. So let me explain why we believe we need a global alliance for AI. My name is Fabro Steibel. I’m Executive Director of ITS Rio, the Institute for Technology and Society, a civil society organization. Last year, on the topic of technodiversity, we raised the challenge that compute power will be very limited, and this is a different kind of problem for the information society. Brazil has 1% of all the data centers in the world; that’s half of Latin America. According to EAA numbers released this week, Brazil has 0.2% of the world’s computational power. And it’s not to say that Brazil or any other country needs to have its own capacity, its own compute power; the idea is access. There is a big challenge for access if you look at the bridge between the Global North and the Global South, or other bridges you can draw. This is why we suggest a global alliance for AI, what we call a GAVI for AI. If you’re not familiar with GAVI, it’s the global alliance for vaccines. In 2000, it started to bring countries and foundations together to purchase something that was very limited in the market. We see the same problem here with compute power: it might be energy, it might be compute parts, but there is a limited supply of these elements to buy. Together, they could have three groups working: one that collects the money and makes sure everything is accountable; one that makes the technical definition of what they should buy; and one that decides how to share, how to distribute, whatever they are buying. Today, GAVI is responsible for more than a third of the vaccines purchased in the world each year. They were able to bridge the limited supply and increase access to vaccines. Is it the same for compute power? In some ways yes, in some ways no. What are the lessons to be learned? What are the different approaches we have? So this is the intro, and I will pass the word to the speakers. We have very different stakeholders here, which is the best way to see the problem from different angles. So Jason, I’ll start with you. Jason Slater is the Chief AI and Digital Innovation Officer at UNIDO. If you’re not familiar with UNIDO, it’s about industrial development, very important for this topic. Jason.


Jason Slater: Thank you very much for having me here today. Just very briefly, my name is Jason Slater. I’m the Chief AI and Digital Innovation Officer for the United Nations Industrial Development Organization. We’re an organization that’s been around nearly 60 years now, a specialized UN agency with a very specific focus on how we can ensure sustainable industrial development.


Fabro Steibel: We go now to Elena Estavillo Flores. She founded and leads the Centro AI para la Sociedad del Futuro, a think tank that works to build the digital future in an ethical, responsible and inclusive way. Elena, can you hear us?


Elena Estavillo Flores: Yes, yes, I hear you very well. Hello.


Fabro Steibel: So, please introduce yourself very briefly, and then later we go to the two questions.


Elena Estavillo Flores: Introduce myself? Yes, of course. I lead the Centro AI. As you said, this is an independent think tank, and we work to foster ethical digital technologies, inclusion and responsibility. I myself have a long career in regulation and public policy: I was a regulator for telecommunications in Mexico, and I have also taught for many years. I’m an economist.


Fabro Steibel: Thank you very much. I like very much to have economists on the panel. I’ll move now to Ivy. Ivy Lau-Schindewolf works on international policy and partnerships at OpenAI.


Ivy Lau-Schindewolf: Hi, thank you so much for having me and for moderating and organizing this panel. My name is Ivy Lau-Schindewolf. I am part of the global affairs team at OpenAI, based in our San Francisco headquarters. I wear two hats. One is that I coordinate our work in growth markets, which means Africa, Latin America and APAC. The other is that I help lead our multilateral engagements. Pleasure to be here.


Fabro Steibel: Thank you very much, Ivy. So now for the fourth participant: Alisson O’Beirne is the Director of International Telecommunications and Internet Policy in the Department of Innovation, Science and Economic Development in the Government of Canada.


Alisson O’Beirne: Thanks so much. I have a hideously long title; it’s mostly because of our department name, so you did very well there. Hi folks, Alisson O’Beirne, as mentioned, Director for International Telecoms and Internet Policy, for our little team within what is the equivalent of the Industry Department in the Government of Canada. We have responsibility for both the ITU and the Internet governance files for the Government of Canada, and it’s a role that I’ve been in for just under a year now. So this is my first IGF, and I am delighted to be able to experience it live and in person after hearing for many years about how great this forum is. I had previously spent about five years in our same Industry Department working on AI policy, so this is an issue that’s near and dear to my heart for sure.


Fabro Steibel: Thank you. And you know, Canada was one of the first countries to jump into the AI regulation arena. So let’s move to the first question: what are the key geopolitical and technical barriers preventing equitable access to computational power for AI development, and how can international cooperation, in particular, help to address them? So, what barriers do we have to achieving access? Jason, would you like to start?


Jason Slater: Yeah, thank you very much. A few things from what we see here. In terms of AI computing power, this is really concentrated in only a few nations right now, I think roughly around 30, primarily the US and China. And the way I want to tackle this is by looking at it from UNIDO’s perspective: who are the member states we are primarily trying to support, and where do we see this digital divide, or AI computing divide? If we look at Africa, for example, we still see that globally there are nearly 3 billion people still unconnected. This is a huge challenge. In Africa alone, adoption of AI and digital tools is in the region of 25%. So for me, one part is of course computing power: we have what we call compute deserts, zones where there is simply no connectivity. But we also have to look at how we ensure adoption, and at the skills perspective, because there is a huge skills gap right now, again particularly in Africa. But there is some positive news, and I think we’ll come to that later, once our fellow panelists have had a chance to speak. So those are some of the barriers we are seeing right now. This has been framed very clearly by the Global Digital Compact that was endorsed last September at the United Nations General Assembly. I was very fortunate to be at the ceremony celebrating us coming together under the Pact for the Future, and one of my actual roles in the UN, and this is what I would use as a call for action today, is that I’m vice-chairing objective number two on an inclusive digital economy, which links closely to objective number five on AI. So my point is that we know what those challenges are and how they convert themselves into barriers; the question is how we can switch into a much more positive, solution-oriented mode. I’ll hold back on that, because later on I’d like to talk about some of the things that not only we, but also our private sector partners and other stakeholders, are putting in place.


Fabro Steibel: Yeah, thank you very much, Jason. Finland, for example, has very good experience with quantum computing and the challenge of sharing it. And the topics you bring up also take us very close to meaningful connectivity, which is an interesting way to see how the problem has changed from ten years ago to today. So I move now to Ivy.


Ivy Lau-Schindewolf: Yeah, I keep thinking about the phrase “equitable access” and what the barriers are, and I actually want to take a step back and think about barriers to access, period. What comes to mind, and I think it’s important to take a moment to think about this, is the gap between supply and demand that everyone faces. We have received a lot of outreach from countries asking how much demand there really is: we have this idea that we need GPUs, but how much do we really need? Can you help us quantify? And to be honest, it is not a question we had thought a lot about until we saw how fast the demand for our products has grown. When we launched ChatGPT in 2022, or 2023, I can’t remember now, it seems like a long time ago, we thought it was a low-key research preview that nobody would use or pay attention to. But we actually ended up having a hundred million users in one month, and now we have five hundred million weekly active users. Our CEO posted on X saying our GPUs are melting, and a lot of people thought, oh my gosh, are they actually melting? No, they were not, but we really have seen a huge inference demand. We knew training models would involve a lot of GPUs, but even we ourselves underestimated how much demand there would be for inference. And as we have seen, when the cost of serving these models came down, the demand and the use also went up. When we plotted it out in a model, we realized: no, we are not on a trajectory to meet the demand. So even we ourselves, in the U.S., realize we need access to more GPUs. That’s why we launched Stargate and why we launched OpenAI for Countries, and I’ll share more about that later. But I want to throw out this framing: yes, we should think about the divide and whether access is equitable, but if we zoom out a little bit, the problem isn’t just inequitable access. The problem is everyone needs more. How do we solve for the gap between supply and demand everywhere?


Fabro Steibel: Thank you. It makes me think of Brazil, where we need to prepare for 5G and for no connectivity at the same time; you have to take both paths together because we have both audiences. So I will go online now to Elena.


Elena Estavillo Flores: Yes, thank you. I was also reflecting on the question, because when we ask if we have enough access, maybe we’re assuming that the computational power exists and the problem is how to access it, or how to access it equitably. But we also have the problem that there is not enough computational power. So it’s a double question: meeting demand and supply, and then having the mechanisms so that everybody, all the interested parties, has this equitable access. It’s not just a case of securing access to something that exists, but of how to produce it. And then we have many barriers that keep reinforcing themselves. I’m thinking mostly of Latin America, the case I know best, because we don’t have enough access to basic infrastructure, to services, to capacity in those services for final users. And on top of all of this sit the companies, the scientists, the academia, the startups that could produce more services, more AI. They face this barrier because there’s not enough compute power for them to develop AI that is focused on the region, its culture, its needs, its ways of deciding how to use AI and which problems to solve. So it’s like a circle that keeps reinforcing itself. And something that makes me reflect is who makes the necessary investments, so that there is more compute power in some countries than in others. It’s very clear that in the U.S., for example, investment in compute power comes mainly from companies; this is private investment oriented to the market. We don’t see the same dynamics in many of our countries in Latin America, where we expect much of the investment to come from governments. But governments don’t have enough resources for the huge investments we would need. So this idea of collaborating to produce this computing infrastructure regionally is very attractive, because individually governments don’t have these resources. But it also makes us question whether that will be enough. We need the other side of the investment, from the private sector, so we also have to work on that. We can think of collaborative efforts, but ones that bring in the private sector, so that we can gather sufficient resources; resources coming only from governments will not, I think, be sufficient.


Fabro Steibel: Elena, thank you very much for your contribution. Alisson, do you want to go next?


Alisson O’Beirne: Yeah, absolutely. I’ll keep my comments relatively brief because I think my colleagues have done a very good job of covering the ground so far. Maybe just to raise a couple of other things. I really want to pick up on what Elena said about regional and geographic differences in access, which I think is one of the major challenges we’re facing. Particularly for emerging economies and the Global South, there are cost issues associated with developing or establishing compute capacity: the cost of creating compute capacity is not the same in every region, and there are existing issues of infrastructure and latency that have to be addressed before compute capacity can be established. I also want to acknowledge, as I think Elena did as well, that there’s a real concentration of the current compute capacity in the hands of very few providers, who, because of the scale of demand and because of their proximity, have incentives to focus on North American and Western European markets. With those barriers that various folks on the panel have talked about, the one thing I want to note is that the real challenge is that some of those barriers are not only complex, they’re compounding, they’re self-perpetuating. As folks are left behind, whether through lacking compute capacity, lacking infrastructure, or in some cases lacking on the demand side, lacking skills or awareness of the necessity of compute power, those who are already behind are going to be left further and further behind, because as demand increases in the places that already have compute capacity, we’re going to see a continued response to that demand instead of a more equitable approach. So that’s, as Elena said, a place where governments, and particularly international discussions and dialogues, are critical, because you can’t rely on market forces to correct for that need for equity and access.


Fabro Steibel: Thank you, Alisson. I think you remind us that the legacy of the digital divide keeps going, so we have the new challenges and the old challenges all together. Exactly. So I’ll go to the second question of the panel: what lessons can be drawn from global public goods initiatives, such as GAVI, which I mentioned before, to design a multilateral framework that ensures fair distribution of the compute resources needed for AI development? Jason, we start with you.


Jason Slater: Yeah, thank you. I think there are a lot of positives we can look at there. When you talk about multilateral or multi-stakeholder frameworks, we actually launched a similar thing nearly two years ago now, known as AIM Global, the global alliance on AI for manufacturing. The purpose of this is that a UN agency, as a trusted advisor if you like, can convene and bring together stakeholders. We have around 140 members now from over 40 countries, a complete mix of academia, think tanks and the private sector; we’ve got people such as Google and Huawei, imagine those two sitting around the table. And what they are trying to do is identify how we can leverage AI to support manufacturing, primarily in the countries we want to support, so developing and middle-income countries. In terms of how GAVI brought together its framework, I think there are a lot of positives one can see in it. From the perspective of what we’re doing around the AIM Global alliance, not to be confused with the title of this panel, we want to make it much more solution-oriented. So when we talk here about the digital divide and the lack of skills, let’s understand where we need to deploy such solutions; let’s really get solution-oriented. We hear enough about the problems. The title here talks about geopolitical issues; we’re in the UN and fully understand that right now, but let’s get on the front foot, because we do know where the gaps are. We know, for example, when we talk about computing power, where the challenges are right now: it comes to growing a tomato in Kenya and getting it to market, where 20% of the crop is lost, and here is a perfect use case for how AI can help. So for me, bringing all those stakeholders together, from the people who have needs all the way to those who provide, and frankly having the UN in this convening role as a trusted advisor, is a phenomenal thing we can learn from the experience of GAVI and that scheme before. Thank you.

Fabro Steibel: Thank you, Jason, and I love that you bring in the tomato from Kenya, because it moves away from the prompting user, the normal user, toward rural areas, manufacturing, intermediaries, sustainability and others.

Jason Slater: I’m going to talk about coffee as well in a few minutes.


Fabro Steibel: So Elena, do you want to go next?


Elena Estavillo Flores: Yes, of course, about lessons. I think we have many. One is that an inclusive governance model works very well, in the sense that decisions were not solely shaped by a small group of wealthier nations or a small group of corporate interests; there was meaningful participation by many different stakeholders. Civil society also played a very critical role in bringing transparency, in monitoring, and in keeping equity central to that monitoring, making sure that decision-making was very aware of the need to ensure that distribution was fair. We can learn from that: if we get to build this collaboration, compute distribution should look not only at technical efficiency but also at social fairness in distributing and giving access to this computing power. Another lesson is being very aware of the importance of having corrective mechanisms to address historic inequality. The mechanisms that have to be designed must include these corrective factors, and must bring local institutions and research ecosystems to engage continually with these systems. So it is not only about bringing in a technology or shipping in equipment to build the centers, but about really understanding how compute is used and for which needs, with local institutions and organizations deciding on this.


Fabro Steibel: Thank you, Elena. And thank you for bringing the researchers and universities back into the topic. I like the move you made from a technical solution to a fairness solution. It’s very difficult to define what fairness is, but it’s certainly something we have to pursue and define. So I like it very much. Ivy, do you want to go next?


Ivy Lau-Schindewolf: Sure. Yeah, it’s kind of hard to go after Elena; that was a very good and compelling point. OpenAI is just one company in this big world, in this ecosystem, but I’ll share what we see from our own vantage point and experience. What I couldn’t agree with more, from both the GAVI example and what my co-panelists have shared, is the importance of working across sectors. What I mean by that is that we, as a single company, have learned from our Stargate experience that it definitely could not be accomplished by one party alone. Let me take a moment to explain what Stargate is, who is involved in it, and why it convinced me that this has to be a multi-stakeholder solution. Stargate is, essentially, a 500-billion-dollar infrastructure project over the next four years, and we have started building data centers in Abilene, Texas. This is who was involved and how it all came together. Like GAVI, there is one group of partners that are technology- and operations-focused; that includes OpenAI, Microsoft, NVIDIA, Oracle and Arm. Then we need another group of partners that are finance-focused; for Stargate, that’s SoftBank, which takes up the financing responsibility. And the third group, very much like how GAVI’s responsibilities are distributed and structured, is the political side. This is something we are working on with state governments, and it extends to what we do internationally; this is what we have talked to the U.S. government about. Another possibility I want to posit here is that we have heard from a lot of countries that what they really want, and these things are not mutually exclusive, is to make sure there is access to the benefits that compute and compute-enabled models bring. So in addition to thinking about infrastructure, we are also very focused on how we can make access to the solutions that compute enables available. That’s why, when we launched OpenAI for Countries, it was not copy-paste Stargate everywhere; it is actually much more expansive than that. We think about how we work with different education partners, universities and schools, and how we increase AI literacy. So we’re not just investing in the stuff, the hardware; we’re investing in people as well. We make our products freely available. We have a partnership with WhatsApp, so that in low-connectivity settings people can still access the benefits of the intelligence that compute enables. And all of that is what we, as one company, are thinking about doing. But we also know that we need funding partners, operations partners, technology partners and political partners, very much like GAVI, so that this is something we can coordinate and offer to the world at very different levels of the stack: at the infrastructure level, at the solutions level, and even at the people level, so that as a society we are evolving along with the technology.


Fabro Steibel: Thank you. That reminds me that bridging the compute divide may or may not involve local facilities; we can process data remotely, as we have done. Brazil has a partnership with Spain for supercomputers; Finland has one; Estonia, I reckon, has one as well. So we can solve problems where energy is available, where water is available, but also where supply and demand sit on another part of the scale and access can still be achieved. Thank you. So I move now to the third question of the panel: how can Global South nations shape AI governance and infrastructure policies to reduce dependence on foreign compute providers and build sustainable local AI capacity? What lessons can be learned from the DeepSeek case, or other cases that have, in a way, rephrased how we ask questions about computational power? Jason, do you want to start?


Jason Slater: No, thank you. I still want to answer the previous question.

Fabro Steibel: Tell us about the coffee.

Jason Slater: Shall I? Because that may be something that is actually very relevant to this, when you’re talking about the Global South. So, I talked about tomatoes before. What we did last year, on the sidelines of the General Assembly, is we built what we call a lighthouse solution. It links to AI. It was a consortium of two governments, Italy and Ethiopia, plus Ilia Levatsa, Google with one of its implementing partners, NGS, and an international coffee organization. What were we trying to solve? There are roughly around 3 billion cups of coffee drunk a day. Ethiopia is the fifth largest coffee producer in the world and the number one coffee producer in Africa. And there’s a new EU directive being issued on deforestation. So the challenge was: how can you leverage AI and digitalization to support coffee farmers? That’s why we built this consortium and brought everybody together. So again, back to the GAVI example and AIM Global, this was a very specific one, and there are many, many other examples; that’s why it links nicely to what you mentioned with Stargate. It’s a huge case where people came together because you could not possibly solve it as a UN agency or individually. So that was the coffee example. We brought it together, we had a solution, we went to Addis, and we actually presented it to the coffee farmers’ association themselves. They didn’t like it. We had McKinsey present it; they didn’t like it. Why? Because they didn’t see the incentive: their land is not subject to deforestation, only a couple of percent. But when one understands that complying with this directive opens up opportunities along the supply chain, opportunities around increasing productivity and what have you, then we tapped into the real incentive behind it. So that’s another thing: when you talk about usages, GPUs and what have you, I think about how many people out there don’t yet know how AI and digitalization are going to support them. So that was the coffee example. Where are we now, 12 months on? We are now looking to see whether we can actually implement a pilot within Ethiopia; in addition, that consortium is opening up an AI initiative in Italy, bringing those examples together, primarily in the area of coffee. So that was the specific example. I’ll answer this question later.


Fabro Steibel: I like the coffee example very much, because it’s hands-on, and with deforestation we bring AI and climate close to each other. So you have wonderful uses for mapping deforestation, mapping disasters, predicting climate and many other things. And if you do it together, if elements of these are open and can be shared, it is certainly an asset against global climate change. So, Elena, could you go next?


Elena Estavillo Flores: Yes. Well, I’ve already said something about what I wanted to focus on: the importance of investing in people, investing in local talent, not just thinking of the hardware, and supporting community-driven research, because countries are innovating despite these infrastructure gaps. And I find this very interesting. Something we always repeat in Mexico, but that I believe is true of many other countries, is that given that we don’t have many resources, we have managed to develop ingenuity and contextual intelligence to find solutions with very limited resources. I see this happening in AI development too: researchers and small developers in civil society are experimenting, using open source at small scale and hybrid models, and finding interesting developments. So I think this is something to learn from, and these are opportunities. If this ingenuity is met with more infrastructure, because we definitely need it, and we can find collaborative ways to build this infrastructure, then there is an opportunity in meeting ingenuity with infrastructure. And another thing I find very interesting is that regions of the world like Latin America are pushing for a more plural and justice-centered vision of AI, one that emphasizes not only individual but collective rights. This can also help us build a strong governance for AI, reshaping this governance in a way that protects these different models of innovation and different models of protecting individual and collective rights.


Fabro Steibel: Thank you, Elena. You remind me how, from the Global South, we just have to be creative with scarce resources. We just have to hack it, we just have to make it happen. If we level up the opportunities, we are certainly going to have better results. So, Ivy, do you want to go next?


Ivy Lau-Schindewolf: Sure. All this talk of infrastructure reminded me that when I first landed at Oslo Airport and took the train to the city, I thought, wow, when trains work, it is an amazing experience and such a great utility, and I wish my home city of San Francisco offered the same thing. I mention that example because when we talk about chips and compute, we might think of them as a different category from the other infrastructure that has built cities and nations. When you ask, Fabro, what to do about that lack of access, I think it comes back to what everyone can do about the gap between supply and demand: how can we solve the problem of access to the infrastructure, and the problem of access to the benefits of the infrastructure? If we can’t access the infrastructure to the same degree, is there a creative way to access the benefits? And I want to offer two more things in addition to talking about data centers. One is that I think it’s important to incentivize and cultivate a vibrant local startup ecosystem. If you have chips, and maybe some people who know how to use a tool, but there aren’t entrepreneurs, and you don’t incentivize and really cultivate that growth, then I feel like we’re not really facilitating access; we’re not facilitating innovation or access to the benefits of the technology. So I think that is really important; it’s one of the prongs of what we offer when we say we’re launching OpenAI for Countries. The other thing touches on what Elena mentioned, and what I said earlier about investing in people. We launched OpenAI Academy earlier this year and we are just starting to scale it. If you haven’t seen some of our videos online already, I encourage you to do that; we have been in production practically every week, they cover all sectors and all skill levels, and sometimes we offer in-person events as well. The main point here is not just that there are more free videos to watch, though there are, and we’re proud to say we have trained 1.4 million people already. When we think about access, there needs to be a very concrete way to apply the technology, and we can’t do that if we don’t actually know what it is and what to do with it. I’m excited that as we move from day zero to day one, or maybe day one to day two, to be fair about all the progress we have seen already, there will be a much more sophisticated, maybe even demanding, approach to how we use the compute and the tools that compute empowers, and not just talk about the stuff itself.


Fabro Steibel: Thanks, Ivy. You remind me that prompting makes humans able to talk to computers easily. With descriptive AI, you usually need a team of technicians to code something, and we just use the result. With generative AI, we regain the capacity to be a normal user and make use of the technology, which puts the issue of supply and demand under great stress, because now we have far more need for technologies and uses. So, Alisson.


Alisson O’Beirne: Yeah, sure. Thanks. I think the question really is about how we include Global South voices in conversations around AI. And I will continue to make it a policy not to tell the Global South how to do policy. But I will say that one of the effective ways we ensure we have voices from the Global South in the conversation, and it reflects back on what Elena said about the GAVI model, is ensuring that we have a multi-stakeholder approach, one that thinks about including a whole range of different potential partners. So when we’re talking about something like a global alliance for artificial intelligence, we have to make sure we’re talking to large and small providers and users, and to both the public and the private space. It has to be the big AI institutions, the accelerators, the startups and the governments, all coming into the conversation together. That will be the most effective way for us to make progress on ensuring equitable access to what’s needed for compute capacity. I’m going to give a little plug to a Canadian initiative in this regard, because I can’t be stopped. Our International Development Research Centre, IDRC, is working with the UK’s Foreign, Commonwealth and Development Office, and together they have committed some $10 million to the development of an equal compute network that strives to do exactly this: to bring together a number of different partners, and different types of partners, to think about equitable access to compute capacity. One of the things the network hopes to achieve, by bringing together a number of partners from the Global South, is the possibility of creating a critical mass in order to obtain better rates and better processes as they look to establish compute capacity. When a number of different countries or institutions speak together, they have greater power than each could have individually. And that is the value of both the global alliance model and the multi-stakeholder model. So it’s a nice little Canadian piece we’ve got.


Fabro Steibel: I like the Canadian piece. We have time for one last question. One thing I like about GAVI is that they have a political group deciding how to share the assets. Imagine you buy 100 units of something: members have to chip in, contribute, and find consensus on how to share. In the case of GAVI, part goes to countries that can pay for vaccines, so the model is sustainable. Part of the vaccines go to countries that cannot pay, so there is access. Part goes to countries that are making an effort to be more sustainable or to produce vaccines themselves, and so on. All of these are political questions about how to share. So if you can purchase together, you can finance the buying; if you know what you are buying, if it is a commodity you can obtain, there is a political group that decides how to share it, and a multi-stakeholder approach seems like a good way to do that. So I’ll go to the last question, where you can advance one of the points you made. Jason, for example, made a call to action in the first question, if you want to bring that back. If you want to go from tomatoes to coffee to something else, feel free. And in case you have questions, I have further questions for you. So Jason, what would you like to expand on?


Jason Slater: No, thank you. Well, linking it to what you just mentioned there about GAVI: we have, as I see it, a Global Digital Compact in place, a Pact for the Future. We have a very clear way forward with those objectives, so that’s my first point. That is a multi-stakeholder approach: governments, tech, academia, we all came together. We’re nearly 12 months in and there’s momentum. Under objective number two on the digital economy, which links to objective five on AI, we have a very clear call for solutions, a call for action, that links to what I mentioned before. We already have a global alliance on AI for manufacturing, which actually has three pillars. One is around smart manufacturing, which is not so much about tomatoes and coffee; it’s when it starts to get a bit harder, on the shop floor, about how you can infuse AI into the production process. Then we have our AI lighthouse solutions that I just mentioned; the example I gave was the one around coffee, and we’re also building up around other products. I’m hoping we collaborate with OpenAI, by the way. And the third component, as I mentioned, is around the digital economy, which links directly back: our mandate in UNIDO is to make sure we implement what has been committed under the GDC. Last but not least, there’s a point Elena mentioned before, and you also mentioned it in terms of regulation around 4G, 5G and so on coming in, which is open innovation. We also have a programme that supports that. Again, it’s not a UNIDO programme, it’s an open one that we convene; we bring together this multi-stakeholder community around innovation. How can you help those great ideas, those startups, those innovators, bring corporates together as well, and, importantly, bring investment? There is a clear funding requirement here: we come and go, projects end, but then what? So that’s basically what I wanted to mention in terms of this multi-stakeholder work around the GDC, AIM Global and those components. As you mentioned about the GAVI model, we are also trying to make sure there is a sustainable model in place, one that ensures the great ideas that come along become investable and sustainable, so that they can ultimately be replicated, not just on coffee in Ethiopia; we can take it to Latin America, tomatoes are grown in most places. How can we come to COP and ensure we bring a solution that helps mitigate climate change? So that would be my call for action: please join us, and we will do our utmost to promote the solutions that people are working on. Thank you.


Fabro Steibel: Thank you, Jason. I was in Bonn last week for SB 62, and the issue of how to use technology to advance the climate agenda, and how to make technology greener, is top of mind; maybe it will be a big theme for COP 30 this year. So, Elena, moving to you: I think we share the challenges of speaking from the civil society point of view. And it’s always an interesting point of view, because we’re not government, we’re not companies; we’re kind of bringing issues to the debate. So what topic would you like to advance?


Elena Estavillo Flores: What topic? Well, governance. We have been talking about governance, but we are supposing the system already exists, and we also need to create and maintain the necessary incentives to create the model and then to sustain it. One of them is to produce a model that is credible, that brings certainty, so that people trust it. Then we have the necessary trust to keep it together and to keep investment coming. And the countries that invest in it also need the right combination of incentives to continue dedicating resources to it. This should come from different expectations: one is to have a fair share of the benefits from the model, and another, by means of trust and credibility, is to keep believing in this model that is bringing benefits to the region, benefits for inclusive development and sustainable development. That’s why I believe the role of civil society, as a component of this governance that produces trust and credibility, is so important.


Fabro Steibel: Thank you. This reminds me that governments usually have a national AI plan for development. If I’m right, the Observatory counts 81 countries that have a plan of action for AI; Brazil just released its plan last month. And one key thing is what kind of pillars we have in this development. What Elena brings is that it needs to be multi-stakeholder; what you bring, Jason, is that it needs open innovation, it has to be shared. This is really interesting. So, Ivy, I go to you. I think one interesting point is that companies are very different from each other. We think of them as the big techs or something like that, but they’re very different: the American ones are very different from the European ones, and very different from the Chinese ones, and even within one country they’re very different. And all of them are trying to bring solutions for bridging the compute divide from their own perspective. So what point would you like to advance?


Ivy Lau-Schindewolf: Thank you for framing it that way, because I was sitting here thinking: I really am not qualified to tell other people what to do. We’re just one company. I can’t tell governments what to do, or other companies, or all of the Global South. But from the perspective of just one company based in San Francisco, the one call to action, so to speak, is: let’s use this technology as a tool, and I’m here to talk about how we can use our tools to solve hard problems. I don’t know that everyone has to worry about getting chips in the same way, if all of us can get the benefits of the chips in the same way. The way to accomplish that is to make our tools accessible. That’s why, as I mentioned earlier, we have an integration with WhatsApp, so that it works even in low-connectivity settings. We are rolling out open-weights models later this summer. You mentioned the Brazil AI plan, and two of the examples mentioned in that plan were Favela GPT and Amazon GPT. With the favelas, we made the tool usable in a lot of favelas; and we also have a partnership with a university in the Amazon, so that our tool is enabling conservation and health insights that are already helping residents, and helping preserve the largest rainforest on the planet. And this is the kind of story that really excites me: there is so much more to come, and progress is happening today, right now. So let’s think about how we use what exists, and what is to come, to solve hard problems and advance the benefits of LLMs.


Fabro Steibel: Thanks, Ivy. If you are online and want to post questions, or if you are in the audience, please form a queue. After Alisson’s contribution, we’re going to open the floor for participation.


Alisson O’Beirne: Perfect, thanks. Really following on from Ivy’s point, I think there is something very important to be said for international action to support equitability of access to compute capacity. There’s a recognition, even among those who already have good access to compute, of the value of sovereign abilities or sovereign access to compute capacity, even in Western Europe and North America. So it’s understandable that the Global South also ought to be able to play in that ecosystem, that there ought to be an equitability of access there too. But going beyond equitability of compute access, I think it’s also true that if we don’t have AI tools that are designed responsibly and that respond to the needs of local communities, access is not going to be sufficient; having access to compute capacity doesn’t mean anything on its own. If we think about equitability of access, we also have to think about equitability of design: designing AI systems and tools that are free from bias, that reflect linguistic diversity, and that are climate-conscious in the way they’re created. And then equitability of use as well: designing AI tools that protect the individuals who are looking to seek the benefits of artificial intelligence, that protect workers, that support development purposes, and whose benefits extend beyond a small group that would already have access to privileges in that regard. So as we think about equitability of access, it has to be part of a broader conversation about equitability in the design and the use of AI systems as well.


Fabro Steibel: Thank you, Alisson. Let me check if there is anyone online or in the audience who wants to make a comment. No? I’ll check here. So we go for a last question. I like the climate agenda and its relationship with this topic. What you bring, Alisson, is that compute power is the start of it, and governments will have a huge role in how to control it, make it safer, make it more secure; and they have to relate to each other. And there’s a huge challenge now for climate, which was a problem already; now it’s a bigger problem. So, as concluding remarks, what would you like to highlight, or what is your call to action?


Jason Slater: Again, I would just underline what we’ve all, in our different ways, been saying: collaboration is absolutely critical in this moment around AI, understanding what the needs are, what the computing power requirements are, and making sure that we don’t leave the Global South behind, that the divide doesn’t get bigger, and so on. So my final comment would just be: let’s join this alliance. Because, as Ivy mentioned with Stargate, and with what’s going on now with the AI factory being opened in South Africa through collaboration between NVIDIA and Cassava, what’s going on in Italy, and what’s happening in Austria, the country I left just a few hours ago, with its AI gigafactory, it is a consortium of people coming together. And yes, we take all of those components around the ethics of AI with us, making sure that this is equitable, transparent, inclusive, collaborative, reliable and safe, and that it ensures privacy. Privacy was a big issue in the coffee example; I didn’t mention that before. So my final point is that I could not underline any further what we’ve all collectively said at all those various levels. All I would offer is that, from the perspective of UNIDO, we do have a platform in place, and needless to say we’re happy to help, to support, and to join forces. Thank you.


Fabro Steibel: Thank you. Elena, do you want to go next with your concluding remarks or highlights?


Elena Estavillo Flores: Yes, yes, of course. And I will build upon the same ideas. Because these technologies tend to concentrate and to build scale, smaller countries, or countries with less access to resources, have to collaborate, have to collaborate to attain enough scale to be part of the movement. Otherwise we will have bigger gaps in our development. So this collaboration, I believe, is a must right now, to bridge gaps and to change this mode of development that has produced so many persistent gaps, gaps that will otherwise get wider. That’s why collaboration has to be the new mode of development.


Fabro Steibel: Thank you. And I hope collaboration for the global alliance sparks from what is happening, and could happen, just outside of here. So, Ivy, do you want to give your concluding remarks?


Ivy Lau-Schindewolf: At the risk of repeating myself and other people, I will still say there is something to the trifecta of political, financial and technical-operational partners as the way we think about who should be at the table. I think that is necessary in all countries and in all fora, so that we can truly collaborate and take into account all the equities, because we’re talking about a massive scale of infrastructure and a massive potential for transformation.


Fabro Steibel: Thank you. Alisson.


Alisson O’Beirne: Thank you. It’s always dangerous to go last on these things. I want to build on the idea of collaboration, which a lot of us have talked about today and which I think is super valuable as we discuss how to encourage equitable access to the benefits of artificial intelligence. One thing I want to add to that concept of collaboration, at the risk of being controversial, is the need for collaboration to happen in a spirit of openness and a spirit of listening. It doesn’t matter what kind of equitability challenge we’re talking about, whether it’s the Global South’s compute access, linguistic diversity on the internet or in AI, or, in Canada, Indigenous connectivity and Indigenous data sovereignty: when we are in these equitability conversations and thinking about how we collaborate with partners, one of the biggest challenges we see, and governments are not immune to this, is that we often come to the table in a spirit of collaboration that means: here’s my idea, and everyone must agree with it in order for us to collaborate. In a space like AI, where the technology is evolving and where our understanding of its capacities and benefits is evolving, we have to come in a spirit of listening and openness, and in a spirit of compromise as well. If we want collective action, we are going to have to compromise, we are going to have to recognize the needs of others, and we are going to have to recognize that we don’t always understand the needs of folks outside our own context and our own community. So if I have a call to action, it is to come to collaboration in a spirit of sometimes recognizing that maybe your own positioning is wrong, or that you need to adjust your own approach, in order to meet the needs of other communities. If we’re not able to do that, we won’t be able to take collective action on these equitability issues, and they cannot be solved without collective action.


Fabro Steibel: Thank you. So, with this, I’ll make some concluding remarks and close the panel. I started the panel wondering whether there is a need for a global alliance, and yes: not the global alliance, but many global alliances, a diversity of global alliances. These global alliances will have shared problems but also different communities, so they need to shape their collaboration according to those communities; the IGF community is different from COP, which is different from the G20, and so on, and cross-collaboration is really important. I still hope that we can increase access to compute power in the Global South somehow, either by installing compute power locally or by sharing it; I still hope that we can share compute power among countries in collaboration, as Brazil does with Spain, for example, for supercomputers. And I still hope that the information society is enhanced by this access to a technology that can be transformative, but that can also pose risks and challenges. So thank you very much for your participation, and we conclude. Thank you.



Jason Slater

Speech speed: 188 words per minute

Speech length: 2152 words

Speech time: 686 seconds

Compute power is concentrated in only a few nations, creating compute deserts with no connectivity and significant skills gaps

Explanation

Jason argues that AI computing power is concentrated in roughly 30 nations, primarily the US and China, creating significant digital divides. He emphasizes that there are compute deserts with no connectivity and highlights the adoption challenges, particularly in Africa where AI and digital tools adoption is only 25%.


Evidence

Nearly 3 billion people globally are still unconnected; Africa has 25% adoption rate for AI and digital tools


Major discussion point

Barriers to Equitable Access to Computational Power


Topics

Development | Infrastructure


Agreed with

– Ivy Lau-Schindewolf
– Elena Estavillo Flores

Agreed on

Solutions must go beyond hardware infrastructure to include human capacity building


Multi-stakeholder frameworks like UNIDO’s AI for manufacturing alliance demonstrate the value of bringing diverse stakeholders together as trusted conveners

Explanation

Jason describes UNIDO’s AI for manufacturing global alliance launched two years ago, which brings together around 140 members from over 40 countries including academia, think tanks, and private sector companies. He emphasizes the UN’s role as a trusted advisor to convene stakeholders and focus on solution-oriented approaches.


Evidence

AI for manufacturing alliance has 140 members from 40+ countries including Google and Huawei; focuses on supporting manufacturing in developing and middle-income countries


Major discussion point

Lessons from Global Public Goods Initiatives


Topics

Development | Economic


Agreed with

– Ivy Lau-Schindewolf
– Elena Estavillo Flores
– Alisson O’Beirne

Agreed on

Multi-stakeholder collaboration is essential for addressing compute access challenges


Practical solutions like the Ethiopia coffee project show how consortiums can address specific local challenges while building capacity

Explanation

Jason describes a lighthouse solution involving Italy, Ethiopia, Google, and international coffee organizations to help Ethiopian coffee farmers comply with EU deforestation directives using AI. The project demonstrates how multi-stakeholder consortiums can address specific local needs while building AI capacity.


Evidence

Ethiopia is the 5th largest coffee producer globally and #1 in Africa; 3 billion cups of coffee consumed daily; new EU deforestation directive creates compliance challenges; project involved multiple governments, tech companies, and international organizations


Major discussion point

Building Local AI Capacity and Reducing Dependence


Topics

Development | Economic | Infrastructure


Disagreed with

– Ivy Lau-Schindewolf
– Elena Estavillo Flores

Disagreed on

Primary approach to solving compute access – infrastructure vs. access to benefits


The Global Digital Compact provides a clear framework for action through multi-stakeholder approaches linking digital economy and AI objectives

Explanation

Jason emphasizes that the Global Digital Compact endorsed in September provides a clear way forward with specific objectives. He highlights his role in vice-chairing objective number two on inclusive digital economy, which links to objective five on AI, and calls for implementing the commitments made under the compact.


Evidence

Global Digital Compact endorsed in September at UN General Assembly; includes objective 2 on digital economy and objective 5 on AI; involves governments, tech, and academia collaboration


Major discussion point

Governance and Sustainability Models


Topics

Development | Legal and regulatory



Ivy Lau-Schindewolf

Speech speed

144 words per minute

Speech length

1923 words

Speech time

796 seconds

The gap between supply and demand affects everyone, with underestimated inference demand creating global GPU shortages

Explanation

Ivy explains that even OpenAI underestimated the demand for inference computing power, initially thinking ChatGPT would be a low-key research preview but ending up with 100 million users in one month and now 500 million weekly active users. She argues that the problem isn’t just inequitable access but that everyone needs more computing power.


Evidence

ChatGPT gained 100 million users in one month; OpenAI now has 500 million weekly active users; CEO posted that ‘GPUs are melting’; inference demand was underestimated compared to training demand


Major discussion point

Barriers to Equitable Access to Computational Power


Topics

Infrastructure | Economic


Multi-sector collaboration is essential, as demonstrated by Stargate’s structure with technology, finance, and political partners

Explanation

Ivy describes Stargate as a $500 billion infrastructure project over four years that requires different types of partners: technology/operations partners (OpenAI, Microsoft, NVIDIA, Oracle, ARM), finance partners (SoftBank), and political partners (state and federal governments). She emphasizes that this multi-stakeholder approach is necessary for large-scale infrastructure projects.


Evidence

Stargate is a $500 billion project over 4 years; involves technology partners (OpenAI, Microsoft, NVIDIA, Oracle, ARM), finance partner (SoftBank), and government partnerships; building data centers in Abilene, Texas


Major discussion point

Lessons from Global Public Goods Initiatives


Topics

Infrastructure | Economic


Agreed with

– Jason Slater
– Elena Estavillo Flores
– Alisson O’Beirne

Agreed on

Multi-stakeholder collaboration is essential for addressing compute access challenges


Disagreed with

– Elena Estavillo Flores

Disagreed on

Role of private sector vs. government investment in compute infrastructure


Cultivating vibrant startup ecosystems and investing in people through education programs are essential beyond just hardware infrastructure

Explanation

Ivy argues that having chips and compute power isn’t sufficient without entrepreneurs and proper incentives for innovation. She emphasizes the importance of investing in people through programs like OpenAI Academy and creating accessible tools that work in low-connectivity settings.


Evidence

OpenAI for countries includes education partnerships with universities and schools; OpenAI Academy has trained 1.4 million people; partnership with WhatsApp for low-connectivity access


Major discussion point

Building Local AI Capacity and Reducing Dependence


Topics

Development | Sociocultural


Agreed with

– Jason Slater
– Elena Estavillo Flores

Agreed on

Solutions must go beyond hardware infrastructure to include human capacity building


Making AI tools accessible through various means, including low-connectivity solutions, can provide benefits without requiring local compute infrastructure

Explanation

Ivy suggests that access to the benefits of compute-enabled AI can be achieved through creative solutions even without local infrastructure. She emphasizes making tools accessible through partnerships like WhatsApp integration and releasing open weights models to enable broader access to AI capabilities.


Evidence

WhatsApp integration for low-connectivity settings; open weights models being released; Brazil AI plan mentions Favela GPT and Amazon GPT as examples of accessible applications


Major discussion point

Governance and Sustainability Models


Topics

Development | Infrastructure


Disagreed with

– Elena Estavillo Flores
– Jason Slater

Disagreed on

Primary approach to solving compute access – infrastructure vs. access to benefits



Elena Estavillo Flores

Speech speed

98 words per minute

Speech length

1302 words

Speech time

790 seconds

Infrastructure barriers are compounded by lack of private investment in Latin America, where governments lack sufficient resources for necessary investments

Explanation

Elena explains that Latin America faces a reinforcing cycle where lack of basic infrastructure and compute power prevents companies, scientists, and startups from developing region-specific AI solutions. She notes that while the US relies on private investment for compute infrastructure, Latin American governments don’t have sufficient resources for such large investments.


Evidence

In the US, private companies make most compute infrastructure investments; Latin American governments lack resources for large infrastructure investments; creates a reinforcing cycle limiting regional AI development


Major discussion point

Barriers to Equitable Access to Computational Power


Topics

Development | Economic | Infrastructure


Agreed with

– Jason Slater
– Alisson O’Beirne

Agreed on

The compute divide creates self-perpetuating disadvantages that require collective action


Disagreed with

– Ivy Lau-Schindewolf

Disagreed on

Role of private sector vs. government investment in compute infrastructure


Inclusive governance models with meaningful civil society participation ensure fairness over pure technical efficiency

Explanation

Elena emphasizes that successful models like GAVI work because decisions aren’t shaped solely by wealthy nations or corporate interests, but include meaningful multi-stakeholder participation. She argues that civil society plays a critical role in bringing transparency, monitoring, and keeping equity central to decision-making processes.


Evidence

GAVI’s success attributed to inclusive governance preventing domination by small groups of wealthy nations or corporations; civil society provides transparency and monitoring for fair distribution


Major discussion point

Lessons from Global Public Goods Initiatives


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jason Slater
– Ivy Lau-Schindewolf
– Alisson O’Beirne

Agreed on

Multi-stakeholder collaboration is essential for addressing compute access challenges


Local ingenuity and contextual intelligence in resource-constrained environments create opportunities when combined with better infrastructure

Explanation

Elena argues that countries with limited resources have developed ingenuity and contextual intelligence to find solutions, including in AI development using open source and small-scale hybrid models. She sees this as an opportunity where meeting local ingenuity with better infrastructure through collaborative efforts could yield significant results.


Evidence

Researchers and developers in resource-constrained environments are experimenting with open source and small-scale hybrid models; Mexico and other countries develop solutions with limited resources


Major discussion point

Building Local AI Capacity and Reducing Dependence


Topics

Development | Sociocultural


Agreed with

– Jason Slater
– Ivy Lau-Schindewolf

Agreed on

Solutions must go beyond hardware infrastructure to include human capacity building


Credible governance models require trust-building mechanisms and fair benefit-sharing to maintain long-term participation and investment

Explanation

Elena emphasizes that sustainable collaborative models need credibility and trust to maintain participation and continued investment. She argues that countries need the right combination of incentives, including fair benefit-sharing and confidence in the model’s ability to deliver inclusive and sustainable development benefits.


Evidence

Trust and credibility are necessary for maintaining investment and participation; fair benefit-sharing creates proper incentives for continued resource dedication


Major discussion point

Governance and Sustainability Models


Topics

Legal and regulatory | Development



Alisson O’Beirne

Speech speed

202 words per minute

Speech length

1596 words

Speech time

473 seconds

Geographic cost differences and market concentration create self-perpetuating disadvantages for emerging economies

Explanation

Alisson explains that the cost of creating compute capacity varies by region due to infrastructure and latency issues, while current compute capacity is concentrated among very few providers who focus on North American and Western European markets. These barriers compound and become self-perpetuating, leaving those already behind further disadvantaged as demand increases in areas that already have capacity.


Evidence

Cost of compute capacity varies by region due to infrastructure and latency; compute capacity concentrated among few providers focused on North American and Western European markets


Major discussion point

Barriers to Equitable Access to Computational Power


Topics

Development | Economic | Infrastructure


Agreed with

– Jason Slater
– Elena Estavillo Flores

Agreed on

The compute divide creates self-perpetuating disadvantages that require collective action


Critical mass through collective action gives countries greater negotiating power than individual efforts

Explanation

Alisson argues that bringing together multiple partners from the Global South creates critical mass that allows for better rates and processes when establishing compute capacity. She emphasizes that countries and institutions have greater power when speaking together than they can achieve individually.


Evidence

Canada’s IDRC and the UK’s Foreign, Commonwealth and Development Office committed $10 million to the equal compute network; the network aims to create critical mass for better negotiating power


Major discussion point

Lessons from Global Public Goods Initiatives


Topics

Development | Economic


Multi-stakeholder approaches including diverse partners from public and private sectors ensure Global South voices in AI governance

Explanation

Alisson emphasizes that effective inclusion of Global South voices requires multi-stakeholder approaches that include large and small providers and users, both public and private sectors, including big AI institutions, accelerators, startups, and governments. She advocates for this comprehensive approach to ensure equitable access to compute capacity.


Evidence

Need to include large and small providers/users, public and private sectors, big AI institutions, accelerators, startups, and governments in conversations


Major discussion point

Building Local AI Capacity and Reducing Dependence


Topics

Legal and regulatory | Development


Agreed with

– Jason Slater
– Ivy Lau-Schindewolf
– Elena Estavillo Flores

Agreed on

Multi-stakeholder collaboration is essential for addressing compute access challenges


Equitable access must encompass design and use of AI systems, not just compute capacity, including bias-free and linguistically diverse tools

Explanation

Alisson argues that equitable access goes beyond just compute capacity to include equitable design and use of AI systems. She emphasizes the need for AI tools that are free from bias, reflect linguistic diversity, are climate conscious, protect workers, and support development purposes beyond privileged groups.


Evidence

Need for AI systems free from bias, reflecting linguistic diversity, climate conscious design, worker protection, and broader development benefits


Major discussion point

Governance and Sustainability Models


Topics

Human rights principles | Sociocultural | Development



Fabro Steibel

Speech speed

158 words per minute

Speech length

2212 words

Speech time

836 seconds

Brazil has only 1% of global data centers and 0.2% of computational power, highlighting access challenges

Explanation

Fabro presents specific statistics showing Brazil’s limited share of global compute infrastructure, representing half of Latin America’s total. He uses this as evidence of the significant access challenges facing countries in the Global South and the need for solutions to bridge the compute divide.


Evidence

Brazil has 1% of global data centers (half of Latin America’s total) and 0.2% of computational power according to EAA numbers


Major discussion point

Barriers to Equitable Access to Computational Power


Topics

Development | Infrastructure


Political mechanisms for fair distribution of resources are crucial components of successful global alliances

Explanation

Fabro explains that GAVI’s success includes a political group that decides how to share purchased vaccines among different categories of countries – those that can pay, those that cannot pay, and those making sustainability efforts. He emphasizes that these political decisions about fair distribution are essential for any global alliance model.


Evidence

GAVI has three groups: one for funding/accountability, one for technical definitions, and one for distribution decisions; distributes vaccines to paying countries, non-paying countries, and those making sustainability efforts


Major discussion point

Lessons from Global Public Goods Initiatives


Topics

Legal and regulatory | Development


National AI development plans must incorporate multistakeholder governance and open innovation principles

Explanation

Fabro notes that 81 countries have national AI plans according to observatory rankings, with Brazil releasing its plan recently. He emphasizes that these plans need to incorporate multistakeholder governance, open innovation, and sharing mechanisms as key pillars for development.


Evidence

Observatory ranked 81 countries with AI national plans; Brazil released its plan last month; plans need multistakeholder and open innovation pillars


Major discussion point

Building Local AI Capacity and Reducing Dependence


Topics

Legal and regulatory | Development


Collaboration must occur in a spirit of openness and compromise, recognizing diverse community needs across different forums

Explanation

Fabro concludes that multiple global alliances are needed rather than a single one, with different communities requiring different collaborative approaches. He emphasizes that collaboration should be tailored to different forums like IGF, COP, and G20, while maintaining the principle of shared problem-solving across diverse communities.


Evidence

Different communities need different approaches – IGF community differs from COP and G20; Brazil has collaboration with Spain for supercomputers as example of international cooperation


Major discussion point

Governance and Sustainability Models


Topics

Legal and regulatory | Development


Agreements

Agreement points

Multi-stakeholder collaboration is essential for addressing compute access challenges

Speakers

– Jason Slater
– Ivy Lau-Schindewolf
– Elena Estavillo Flores
– Alisson O’Beirne

Arguments

Multi-stakeholder frameworks like UNIDO’s AI for manufacturing alliance demonstrate the value of bringing diverse stakeholders together as trusted conveners


Multi-sector collaboration is essential, as demonstrated by Stargate’s structure with technology, finance, and political partners


Inclusive governance models with meaningful civil society participation ensure fairness over pure technical efficiency


Multi-stakeholder approaches including diverse partners from public and private sectors ensure Global South voices in AI governance


Summary

All speakers agreed that addressing compute access requires bringing together diverse stakeholders including governments, private sector, academia, and civil society organizations, with each contributing different capabilities and perspectives


Topics

Development | Legal and regulatory


The compute divide creates self-perpetuating disadvantages that require collective action

Speakers

– Jason Slater
– Elena Estavillo Flores
– Alisson O’Beirne

Arguments

Compute power is concentrated in only a few nations, creating compute deserts with no connectivity and significant skills gaps


Infrastructure barriers are compounded by lack of private investment in Latin America, where governments lack sufficient resources for necessary investments


Geographic cost differences and market concentration create self-perpetuating disadvantages for emerging economies


Summary

Speakers agreed that the concentration of compute power creates reinforcing cycles of disadvantage where those already behind fall further behind, requiring coordinated intervention to break these patterns


Topics

Development | Economic | Infrastructure


Solutions must go beyond hardware infrastructure to include human capacity building

Speakers

– Jason Slater
– Ivy Lau-Schindewolf
– Elena Estavillo Flores

Arguments

Compute power is concentrated in only a few nations, creating compute deserts with no connectivity and significant skills gaps


Cultivating vibrant startup ecosystems and investing in people through education programs are essential beyond just hardware infrastructure


Local ingenuity and contextual intelligence in resource-constrained environments create opportunities when combined with better infrastructure


Summary

All three speakers emphasized that addressing the compute divide requires investing in people, skills development, and local innovation ecosystems, not just physical infrastructure


Topics

Development | Sociocultural


Similar viewpoints

Both speakers emphasized the critical importance of ensuring Global South voices are meaningfully included in AI governance structures, with Elena focusing on civil society’s role in ensuring fairness and Alisson emphasizing comprehensive multi-stakeholder inclusion

Speakers

– Elena Estavillo Flores
– Alisson O’Beirne

Arguments

Inclusive governance models with meaningful civil society participation ensure fairness over pure technical efficiency


Multi-stakeholder approaches including diverse partners from public and private sectors ensure Global South voices in AI governance


Topics

Legal and regulatory | Development


Both speakers provided concrete examples of large-scale collaborative projects that demonstrate how different types of partners (technical, financial, political) must work together to achieve infrastructure goals

Speakers

– Jason Slater
– Ivy Lau-Schindewolf

Arguments

Practical solutions like the Ethiopia coffee project show how consortiums can address specific local challenges while building capacity


Multi-sector collaboration is essential, as demonstrated by Stargate’s structure with technology, finance, and political partners


Topics

Infrastructure | Economic


Both speakers emphasized that sustainable collaborative models require building trust and creating fair mechanisms for participation and benefit-sharing, with collective action providing stronger negotiating positions than individual country efforts

Speakers

– Elena Estavillo Flores
– Alisson O’Beirne

Arguments

Credible governance models require trust-building mechanisms and fair benefit-sharing to maintain long-term participation and investment


Critical mass through collective action gives countries greater negotiating power than individual efforts


Topics

Legal and regulatory | Development | Economic


Unexpected consensus

Universal supply and demand gap affecting all regions

Speakers

– Ivy Lau-Schindewolf
– Elena Estavillo Flores

Arguments

The gap between supply and demand affects everyone, with underestimated inference demand creating global GPU shortages


Infrastructure barriers are compounded by lack of private investment in Latin America, where governments lack sufficient resources for necessary investments


Explanation

Unexpectedly, both a major AI company representative and a Global South policy expert agreed that the compute shortage is a universal problem rather than just an equity issue, with Ivy noting that even OpenAI faces supply constraints and Elena acknowledging the global nature of insufficient compute power


Topics

Infrastructure | Economic


Importance of local innovation and contextual solutions

Speakers

– Elena Estavillo Flores
– Jason Slater

Arguments

Local ingenuity and contextual intelligence in resource-constrained environments create opportunities when combined with better infrastructure


Practical solutions like the Ethiopia coffee project show how consortiums can address specific local challenges while building capacity


Explanation

There was unexpected consensus between a civil society representative and a UN official on the value of locally-driven innovation, with both emphasizing that solutions must be contextually relevant rather than one-size-fits-all approaches


Topics

Development | Sociocultural


Overall assessment

Summary

The speakers demonstrated remarkably high consensus on the need for multi-stakeholder collaboration, the self-perpetuating nature of the compute divide, and the importance of human capacity building alongside infrastructure development. There was also strong agreement on governance principles emphasizing inclusivity and fairness.


Consensus level

High consensus with complementary perspectives rather than conflicting viewpoints. The speakers represented different sectors (UN, government, private sector, civil society) but shared fundamental agreement on problem diagnosis and solution approaches. This strong consensus suggests viable pathways for implementing global alliance models for AI compute access, with each sector bringing necessary but different capabilities to the collaboration.


Differences

Different viewpoints

Primary approach to solving compute access – infrastructure vs. access to benefits

Speakers

– Ivy Lau-Schindewolf
– Elena Estavillo Flores
– Jason Slater

Arguments

Making AI tools accessible through various means, including low-connectivity solutions, can provide benefits without requiring local compute infrastructure


Infrastructure barriers are compounded by lack of private investment in Latin America, where governments lack sufficient resources for necessary investments


Practical solutions like the Ethiopia coffee project show how consortiums can address specific local challenges while building capacity


Summary

Ivy emphasizes making AI tools accessible without necessarily requiring local infrastructure, focusing on creative solutions like WhatsApp integration. Elena and Jason emphasize the need for actual infrastructure development and local capacity building, with Elena particularly stressing the investment challenges in Latin America.


Topics

Development | Infrastructure | Economic


Role of private sector vs. government investment in compute infrastructure

Speakers

– Ivy Lau-Schindewolf
– Elena Estavillo Flores

Arguments

Multi-sector collaboration is essential, as demonstrated by Stargate’s structure with technology, finance, and political partners


Infrastructure barriers are compounded by lack of private investment in Latin America, where governments lack sufficient resources for necessary investments


Summary

Ivy presents a model where private sector leads with companies like OpenAI, Microsoft, and SoftBank taking primary roles, while Elena argues that Latin America lacks sufficient private investment and governments don’t have adequate resources, suggesting need for different collaborative models.


Topics

Economic | Development | Infrastructure


Unexpected differences

Framing of the core problem – supply shortage vs. access inequality

Speakers

– Ivy Lau-Schindewolf
– Elena Estavillo Flores
– Fabro Steibel

Arguments

The gap between supply and demand affects everyone, with underestimated inference demand creating global GPU shortages


Infrastructure barriers are compounded by lack of private investment in Latin America, where governments lack sufficient resources for necessary investments


Brazil has only 1% of global data centers and 0.2% of computational power, highlighting access challenges


Explanation

Unexpectedly, there’s disagreement on whether the fundamental issue is global supply shortage (Ivy’s position) versus unequal distribution and access (Elena and Fabro’s position). This is significant because it affects whether solutions should focus on increasing overall supply or redistributing existing capacity.


Topics

Infrastructure | Economic | Development


Overall assessment

Summary

The main areas of disagreement center on: 1) Whether to prioritize infrastructure development vs. tool accessibility, 2) The appropriate balance between private sector leadership vs. government/multilateral coordination, and 3) Whether the core problem is supply shortage vs. access inequality


Disagreement level

Low to moderate disagreement level. While speakers have different emphases and approaches, they share fundamental agreement on the need for collaboration and addressing compute access challenges. The disagreements are more about strategy and implementation rather than fundamental goals, which suggests potential for finding common ground and complementary approaches rather than conflicting solutions.


Takeaways

Key takeaways

There is a critical need for multiple global alliances for AI compute access, similar to GAVI’s model for vaccines, but adapted to different communities and contexts


The compute divide is both a supply problem (insufficient global capacity) and an access problem (inequitable distribution), affecting even developed nations


Multi-stakeholder collaboration involving technology, finance, and political partners is essential for addressing compute access challenges


Local capacity building must go beyond infrastructure to include skills development, startup ecosystems, and culturally relevant AI solutions


Equitable access encompasses not just compute power but also equitable design and use of AI systems that reflect linguistic diversity and local needs


Alternative approaches like remote compute access and tool accessibility can provide AI benefits without requiring local infrastructure investment


Civil society plays a crucial role in ensuring governance models remain credible, transparent, and focused on fair benefit distribution


Successful collaboration requires openness, compromise, and recognition of diverse community needs rather than imposing single solutions


Resolutions and action items

Jason Slater called for joining UNIDO’s existing AI for manufacturing global alliance with 140 members from 40+ countries


Participants encouraged to engage with the Global Digital Compact framework for implementing multi-stakeholder AI solutions


OpenAI committed to expanding their ‘OpenAI for countries’ program and Academy training (already reaching 1.4 million people)


Canada’s IDRC and the UK’s Foreign, Commonwealth and Development Office committed $10 million to develop an ‘equal compute network’


UNIDO to continue developing AI lighthouse solutions beyond the Ethiopia coffee project to other regions and use cases


Participants agreed to explore replicating successful consortium models like Stargate for international compute infrastructure projects


Unresolved issues

How to quantify actual compute demand versus perceived need in different regions and sectors


Specific mechanisms for fair distribution of compute resources among participating countries in a global alliance


Sustainable financing models that balance private investment with public sector participation in developing countries


Technical standards and interoperability requirements for shared compute infrastructure across borders


Governance structures that can effectively balance efficiency with equity in resource allocation decisions


How to address the climate impact of increased compute infrastructure while meeting development needs


Specific metrics and accountability mechanisms for measuring equitable access and benefit distribution


Integration challenges between different national AI development plans and international collaboration frameworks


Suggested compromises

Accepting that not every country needs local compute infrastructure if they can access benefits through remote processing and tool accessibility


Balancing market-driven efficiency with equity considerations through hybrid public-private partnership models


Combining infrastructure investment with people-focused programs (education, startups, local innovation) rather than hardware-only approaches


Allowing for diverse governance models across different regional alliances while maintaining interoperability and shared principles


Recognizing that collaboration requires adjusting individual country positions and approaches to meet collective needs


Accepting that multiple global alliances may be needed for different communities rather than seeking one universal solution


Integrating climate considerations with development needs rather than treating them as competing priorities


Thought provoking comments

The problem isn’t just inequitable access. The problem is everyone needs more. How do we solve for the gap between supply and demand everywhere?

Speaker

Ivy Lau-Schindewolf


Reason

This comment fundamentally reframed the entire discussion by challenging the basic assumption that the issue is primarily about distribution of existing resources. Instead, it highlighted that even developed nations face compute scarcity, shifting the focus from a North-South divide to a universal supply-demand crisis.


Impact

This reframing influenced subsequent speakers to acknowledge the dual nature of the problem – both scarcity and inequitable access. It moved the conversation away from a simple redistribution model toward more complex solutions involving capacity building and creative access mechanisms.


We don’t have enough access to basic infrastructure… But then this comes over all of this, the companies, the scientists, the academia, the startups that could produce more services, more AI. They have this barrier because there’s not enough compute power so that they can develop the AI that is focused on the region, culture, needs… So this is just something, it’s like a circle that keeps reinforcing itself.

Speaker

Elena Estavillo Flores


Reason

This insight identified the self-perpetuating nature of the compute divide, showing how lack of access creates a vicious cycle that prevents local innovation and cultural adaptation of AI technologies. It connected infrastructure gaps to broader issues of technological sovereignty and cultural representation.


Impact

This comment deepened the discussion by introducing the concept of compound disadvantages and helped other panelists recognize that the problem extends beyond mere access to include innovation capacity and cultural relevance. It influenced later discussions about the need for local talent development and community-driven research.


Given that we don’t have many resources, so then we have managed to develop ingenuity and contextual intelligence to find solutions with very limited resources… if this ingenuity is met with more infrastructure… then there is an opportunity to meeting ingenuity with infrastructure.

Speaker

Elena Estavillo Flores


Reason

This comment flipped the narrative from deficit-focused to asset-based thinking, highlighting how resource constraints in the Global South have fostered innovation and creativity. It suggested that the solution isn’t just about providing resources but about amplifying existing capabilities.


Impact

This perspective shift influenced the conversation to consider Global South countries not just as recipients of aid but as sources of innovation. It contributed to a more nuanced understanding of collaboration that values different forms of intelligence and problem-solving approaches.


Some of those barriers are not only complex, but they’re also, they’re compounding, they’re self-perpetuating. So as folks are left behind and as there’s a lack in compute capacity… those that are already behind the game are going to be left further and further behind because as the demand increases in those places that already have compute capacity, we’re going to see just a continuation of response to that instead of a more equitable approach.

Speaker

Alisson O’Beirne


Reason

This comment introduced a temporal dimension to the inequality problem, showing how current disparities will exponentially worsen over time without intervention. It highlighted the urgency of action and the inadequacy of market-based solutions alone.


Impact

This insight reinforced the need for proactive international cooperation and helped justify why market forces alone cannot solve the equity problem. It strengthened arguments for coordinated global action and influenced the discussion toward more interventionist approaches.


Going beyond equitability of compute access… if we don’t have AI tools that are designed responsibly and that respond to the needs of local communities, access is not going to be sufficient. So having access to compute capacity doesn’t mean anything. If we think about equitability of access, we also have to be thinking about equitability of design… and equitability of use as well.

Speaker

Alisson O’Beirne


Reason

This comment expanded the scope of the discussion beyond infrastructure to include the entire AI development and deployment pipeline. It introduced the concept that true equity requires consideration of design, cultural relevance, and end-user needs, not just computational resources.


Impact

This broadened the conversation significantly, moving from a narrow focus on compute resources to a holistic view of AI equity. It influenced other speakers to consider the full ecosystem of AI development and helped establish that technical solutions alone are insufficient without addressing social and cultural dimensions.


We have to come in a spirit of listening and openness and in a spirit of compromise as well… We are going to have to have a recognition of the needs of others and a recognition that we don’t always understand the needs of folks who are outside our own context in our own community.

Speaker

Alisson O’Beirne


Reason

This comment addressed the meta-challenge of how to actually achieve meaningful collaboration, acknowledging that good intentions aren’t enough and that successful partnerships require humility and genuine openness to different perspectives and needs.


Impact

This served as a crucial reality check for the entire discussion, grounding the technical and policy conversations in the practical challenges of cross-cultural and cross-sector collaboration. It provided a framework for how the proposed global alliances should actually operate.


Overall assessment

These key comments fundamentally transformed the discussion from a relatively straightforward resource allocation problem into a complex, multi-dimensional challenge requiring systemic thinking. The conversation evolved from initial assumptions about redistribution of existing compute resources to a more sophisticated understanding that encompasses supply creation, cultural adaptation, innovation ecosystems, and collaborative governance. The most impactful insights came from reframing the problem (universal scarcity vs. just inequity), recognizing systemic barriers (self-reinforcing cycles), and identifying assets in unexpected places (Global South ingenuity). These comments collectively elevated the discussion from technical solutions to broader questions of equity, sovereignty, and sustainable development, ultimately making the case for why simple technology transfer is insufficient and why genuine partnership and systemic change are necessary.


Follow-up questions

How much compute demand do countries really need and can this be quantified?

Speaker

Ivy Lau-Schindewolf


Explanation

OpenAI has received outreach from countries asking for help quantifying their actual compute needs, indicating this is a critical gap in understanding that affects planning and resource allocation


How can we measure and address the gap between supply and demand for compute power globally?

Speaker

Ivy Lau-Schindewolf


Explanation

Even developed countries like the US are struggling with compute shortages, suggesting the problem extends beyond just inequitable distribution to overall supply constraints


What are the most effective mechanisms for ensuring fair distribution of compute resources while maintaining technical efficiency?

Speaker

Elena Estavillo Flores


Explanation

There’s a need to balance technical optimization with social fairness in compute distribution, requiring research into governance models that can achieve both goals


How can we design corrective mechanisms to address historic inequalities in compute access?

Speaker

Elena Estavillo Flores


Explanation

Historical digital divides are compounding with AI compute divides, requiring specific interventions to prevent further marginalization of already disadvantaged regions


What is the optimal balance between local compute infrastructure and remote access to compute resources?

Speaker

Fabro Steibel


Explanation

The discussion raised questions about whether countries need local compute capacity or if remote access through partnerships (like Brazil-Spain collaboration) could be sufficient


How can we better integrate private sector investment with government resources for compute infrastructure in developing countries?

Speaker

Elena Estavillo Flores


Explanation

Government resources alone are insufficient for the massive investments needed, requiring research into public-private partnership models for compute infrastructure


What are the most effective ways to cultivate local startup ecosystems and entrepreneurship around AI in the Global South?

Speaker

Ivy Lau-Schindewolf


Explanation

Access to compute infrastructure alone is insufficient without local innovation ecosystems to utilize it effectively


How can we measure and replicate the ‘ingenuity with limited resources’ innovations happening in the Global South?

Speaker

Elena Estavillo Flores


Explanation

There’s recognition that resource constraints are driving innovation in the Global South, but more research is needed on how to scale and support these approaches


What are the most effective models for multi-stakeholder governance in global compute resource allocation?

Speaker

Alisson O’Beirne


Explanation

The complexity of compute resource allocation requires bringing together diverse stakeholders, but the optimal governance structures need further development


How can we ensure AI tools are designed to reflect linguistic diversity and local community needs?

Speaker

Alisson O’Beirne


Explanation

Equitable access to compute must be coupled with equitable design of AI systems, requiring research into inclusive AI development practices


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #91 Inclusion of the Global Majority in C2pa Technology


Session at a glance

Summary

This discussion focused on C2PA (Coalition for Content Provenance and Authenticity), an open standard for content authenticity and provenance, presented by BBC Media Action and BBC R&D representatives along with international media partners. Muge Ozkaptan from BBC Media Action introduced the session, explaining how their organization supports media outlets in 30 countries with digital transformation and AI adoption, particularly through their “Pursuit of Truth” initiative supporting 30,000 media professionals and 1,000 media outlets in fragile environments.


Charlie Halford from BBC R&D explained that C2PA addresses the growing problem of disinformation by attaching cryptographic signatures to content, similar to website security certificates, allowing users to verify the origin and authenticity of media. He demonstrated how fake BBC content has been created simply by adding BBC logos and graphics to misleading information, highlighting the need for content verification technology. The BBC’s research showed that when audiences were provided with C2PA transparency data about content origins, they demonstrated significantly higher trust levels, particularly among users who weren’t regular BBC website visitors.


International perspectives came from media partners facing real-world challenges with disinformation. Khalifa Said Rashid from Tanzania’s Chanzo digital outlet described problems with brand impersonation and out-of-context video content being recycled during crisis situations. Kyrylo Lesin from Ukraine’s Suspilne public service media explained how they face aggressive disinformation campaigns, particularly since Russia’s invasion, and view C2PA as crucial for helping audiences distinguish trustworthy content from other sources.


The discussion concluded with recognition that broader adoption requires platform support, improved media literacy, and continued development of security procedures and AI content labeling capabilities.


Keypoints

**Major Discussion Points:**


– **C2PA Technology Overview and Implementation**: Charlie Halford explained C2PA (Coalition for Content Provenance and Authenticity) as an open standard that uses cryptographic signatures to verify content authenticity and origin. The BBC has been piloting this technology, attaching verification data to content to help audiences distinguish genuine news from disinformation.


– **Global Disinformation Challenges**: Multiple speakers highlighted how media organizations worldwide face brand impersonation and content manipulation. Examples included fake BBC-branded content and recycled videos taken out of context during crises, particularly affecting outlets in Tanzania and Ukraine during wartime.


– **Media Literacy and User Trust Research**: The BBC conducted studies showing that when audiences were provided with C2PA provenance data, they demonstrated significantly higher trust levels in the content, especially among users who weren’t already familiar with the BBC brand.


– **Platform Adoption and AI Content Labeling**: Discussion covered how social media platforms like TikTok are beginning to integrate C2PA standards, particularly for detecting and labeling AI-generated content from tools like OpenAI, though broader adoption across platforms remains limited.


– **Barriers to Global Implementation**: Key challenges identified include the need for device-level integration, security procedures for private key management, platform cooperation, and extensive media literacy education to help users understand and utilize provenance information effectively.


**Overall Purpose:**


The discussion aimed to present C2PA as a promising solution for combating disinformation and building content authenticity, while gathering insights from international media partners about practical implementation challenges and needs in diverse global contexts.


**Overall Tone:**


The tone was consistently optimistic and collaborative throughout. Speakers maintained an educational and forward-looking approach, acknowledging current limitations while expressing confidence in the technology’s potential. The discussion emphasized partnership and collective action rather than dwelling on problems, with participants sharing practical experiences and research findings in a constructive manner.


Speakers

– **Muge Ozkaptan** – Senior Product and AI Lead at BBC Media Action, supports country offices and media organizations for digital transformation and AI adoption with focus on responsible and ethical approaches


– **Charlie Halford** – Principal Research Engineer at BBC R&D, works on C2PA technology implementation and content authenticity solutions


– **Khalifa Said Rashid** – Editor-in-Chief of the Chanzo, a digital media platform in Tanzania focusing on public interest journalism, public accountability and investigation


– **Kyrylo Lesin** – Senior Product Manager at Suspilne (public service media from Ukraine), works on digital transformation and journalism delivery


– **Audience** – Participant asking questions during the Q&A session


**Additional speakers:**


– **Amy Mitchell** – From Center for News Technology Innovation, researcher focusing on public service aspects of news technology


Full session report

# Comprehensive Discussion Report: C2PA Technology for Content Authenticity and Global Media Challenges


## Introduction and Context


This discussion centred on the Coalition for Content Provenance and Authenticity (C2PA), an open standard for content authenticity and provenance, presented by representatives from BBC Media Action and BBC R&D alongside international media partners. The session brought together diverse perspectives from global media organisations to examine how technological solutions can address the growing challenges of disinformation and content manipulation.


Muge Ozkaptan from BBC Media Action opened the session by establishing the organisation’s global reach and mission. BBC Media Action operates in 30 countries with content in 50 languages, focusing particularly on supporting media organisations in fragile environments. Ozkaptan emphasised the importance of bringing diverse voices into technology discussions, noting that “when we talk about technology generally, we talk about specification and applications, but it’s important that we bring in those diverse voices and understand their actual needs, how they work, what kind of challenges they are facing in their day-to-day life and work, and how innovative solutions like C2PA can fit in that area.”


## Technical Overview of C2PA Technology


Charlie Halford from BBC R&D provided a comprehensive explanation of C2PA technology and its implementation. As Halford explained, “C2PA itself is a standard, a technical standard. And what it does is it describes how you attach a signature, a cryptographic signature, the same kind that you might use on a website to give it that green lock.” The technology addresses the growing problem of disinformation by attaching verification data to content, allowing users to confirm the origin and authenticity of media they encounter.


Halford demonstrated the practical challenges facing media organisations by showing examples of fake BBC content. He explained that sophisticated artificial intelligence isn’t always necessary for effective disinformation: “These aren’t pieces of AI disinformation. This is just somebody with a video editor. They found the BBC logo. They found the BBC font. They know what the BBC’s graphics look like. And they’ve put it out. The footage underneath them isn’t fake; they’ve just changed the message.” This observation highlighted how simple brand impersonation can be highly effective in misleading audiences who trust established media brands.


The BBC has conducted research into C2PA implementation, working with partners including IPTC for publisher certificates. The technology currently works with existing software and tools, including various cameras and content creation applications. Halford also explained the concept of “redaction” within C2PA systems, which allows for the removal of sensitive information like location and time data that could endanger subjects or photographers while maintaining content authenticity verification.


## Global Perspectives on Disinformation Challenges


### Tanzania: Brand Impersonation and Crisis Communication


Khalifa Said Rashid, Editor-in-Chief of the Chanzo, a digital media platform in Tanzania focusing on public interest journalism and accountability, provided crucial insights via recorded audio about the challenges facing media organisations in developing countries. The Chanzo struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content regularly.


Rashid explained the particular vulnerability that brand trust creates: “And it has been very difficult for us to deal with a situation like that, because many people trust our brand, and when they see content online with our logos and brand colours, it can be very difficult for the average reader to tell whether it’s real or not.”


### Ukraine: Wartime Disinformation and Hybrid Warfare


Kyrylo Lesin, Senior Product Manager at Suspilne, Ukraine’s public service media, brought a unique perspective shaped by operating under wartime conditions. Suspilne, established eight years ago and recognised by independent watchdog organisations for delivering trustworthy journalism, faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since Russia’s invasion.


Lesin highlighted how disinformation campaigns affect content distribution systems: “For example, Google Discover: all of these products operate, to some extent, as black boxes, and there is a real lack of signals and parameters they can draw on to rank the content with the most value for the end user.” This observation introduced an important dimension to the C2PA discussion: the potential for authenticity signals to influence algorithmic content distribution, helping platforms prioritise trustworthy content over manipulated material.


## Research Findings and Platform Implementation


The BBC’s research into user response to C2PA technology has involved multiple studies with different methodologies. Charlie Halford presented findings showing that users respond positively to additional transparency data, with research indicating that around 80% of users found extra data more useful, even without recognising the C2PA brand specifically. This finding was particularly significant because it suggested that the mere presence of verification information builds trust, regardless of technical literacy or brand recognition.


A separate study conducted by the University of Bergen expanded on the BBC’s research, providing additional validation of user interest in content authenticity features. However, as Amy Mitchell from the Center for News Technology Innovation pointed out, important questions remain about the distinction between user interest in authenticity features versus actual behavioural change in content consumption patterns.


Regarding platform adoption, Halford reported mixed progress. Social media platforms show positive but limited response, primarily adopting C2PA for AI-generated content labelling rather than general content verification. TikTok, for example, has begun integrating C2PA standards, particularly for detecting and labelling AI-generated content from tools like OpenAI and Microsoft, though broader adoption across platforms remains limited.


Major technology companies including Adobe, Microsoft, Google, Meta, and OpenAI are part of the C2PA coalition, and the technology works with existing software and cameras currently available. However, broader implementation faces several challenges, including the need for device-level integration, security procedures for private key management, platform cooperation, and extensive media literacy education.


## Implementation Challenges and Media Literacy


Despite the consensus on C2PA’s value, speakers identified several significant barriers to global implementation. Media literacy education emerged as a crucial requirement, with Charlie Halford noting that “you can’t just put this information in front of people and expect them to understand it, so we have to use our products, we have to use our journalism, to explain to people what this means.”


The discussion revealed that while users respond positively to transparency data, C2PA as a brand lacks public recognition. This creates a challenge for implementation, as the technology’s effectiveness depends partly on user understanding and trust in the verification system itself.


Technical implementation challenges include the need for broader device and tool integration to make C2PA automatic rather than requiring special procedures. Media organisations also need to develop robust security procedures for managing the private keys required for C2PA implementation, ensuring the integrity and trustworthiness of the system.
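

The hash-then-sign pattern behind these signatures can be sketched in a few lines. The example below, using Python’s hashlib and the third-party cryptography package with an Ed25519 key, is a deliberate simplification: real C2PA manifests use COSE signatures and X.509 certificate chains embedded in the file. The sketch only shows why custody of the signing key is the crux, since anyone holding it can sign content as the publisher.

```python
# Minimal sketch of hash-then-sign, the pattern underlying C2PA's binding
# of a signed manifest to media bytes. Real C2PA uses COSE signatures and
# X.509 certificates inside a JUMBF box; this is a simplification.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

media_bytes = b"\xff\xd8...jpeg bytes..."            # stand-in for the asset
content_hash = hashlib.sha256(media_bytes).digest()  # hard binding to the bytes

private_key = Ed25519PrivateKey.generate()           # in practice: kept in an HSM
signature = private_key.sign(content_hash)           # whoever holds this key can
                                                     # publish "as" the organisation

# Verification fails if either the media bytes or the signature change.
public_key = private_key.public_key()
try:
    public_key.verify(signature, content_hash)
    print("signature valid: bytes match what the publisher signed")
except InvalidSignature:
    print("tampered: bytes or signature no longer match")
```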


## Audience Engagement and Future Development


The session included interactive elements, with audience participation facilitated through a Slido Q&A system and QR codes for real-time questions. Participants raised important questions about regulatory integration, asking about plans for integrating C2PA into existing regulations for information integrity, such as the EU Digital Services Act or the UK Online Safety Act.


The discussion concluded with several concrete action items and future development plans. BBC Media Action committed to continuing support for global media organisations through workshops and conversations to include diverse voices in C2PA development. A pilot implementation is planned between BBC and Suspilne to integrate C2PA into end-to-end web publishing processes, providing a practical test case for the technology in a challenging operational environment.


## Ongoing Challenges and Considerations


Several significant issues remain unresolved and require continued attention. Limited social media platform adoption beyond AI content labelling represents a major challenge, as platforms show mixed response to general content verification features. The lack of public recognition of the C2PA brand itself requires significant media literacy education efforts to achieve meaningful adoption.


The challenge of scaling adoption across diverse media environments with varying technical capabilities remains substantial. Implementation needs to account for different levels of technical infrastructure and resources available to media organisations in different regions.


There are also ongoing questions about achieving broader device and tool adoption so that C2PA becomes built into cameras and content creation tools by default, making the technology seamless rather than requiring special technical knowledge from users. Additionally, the need for better AI-generated content detection and improved reliability of AI labelling in C2PA was acknowledged, as current AI detection methods are not completely reliable.


## Conclusion


The discussion demonstrated strong consensus among diverse stakeholders about the value of C2PA technology for addressing global challenges in content authenticity and disinformation. The perspectives from media organisations operating in different contexts, from the BBC’s established presence to The Chanzo’s work in Tanzania to Suspilne’s wartime operations, illustrated both universal challenges and context-specific needs.


The conversation successfully balanced technical capabilities with practical implementation concerns, emphasising that successful C2PA adoption requires not just technical standards but also media literacy education, platform cooperation, and understanding of diverse global media environments. The planned pilot implementations and continued research efforts indicate positive momentum towards broader adoption of content authenticity standards in the global media landscape.


Session transcript

Muge Ozkaptan: Let me see who is here. Hello everyone, I’m Muge Ozkaptan, Senior Product and AI Lead at BBC Media Action. I support our country offices and media organisations in their digital transformation and AI adoption, especially from a responsible and ethical point of view. We also support innovation solutions, including C2PA, making sure they are scalable, practical and impactful. I’d like to introduce my colleagues from the BBC: Charlie Halford, who is Principal Research Engineer at BBC R&D, and Kyrylo Lesin, who is a Senior Product Manager. We also have Khalifa Said Rashid, who is Editor-in-Chief of the Chanzo, a digital outlet in Tanzania. He couldn’t come here today, but he’s attending through a recorded audio, and we will hear his thoughts about C2PA. I’d like to talk a little bit about BBC Media Action, and then about BBC Media Action’s approach to C2PA. I will then hand over to Charlie, who will talk about what C2PA is, in detail and in action, and how the BBC is using it. Then we will hear from our media partners, Suspilne and the Chanzo, about their reflections and needs around C2PA. We will have some Q&A at the end, but if you’d like to join online, we have Slido, so you can see the QR code on the screen; you can also type 3710912 for your questions, or you can ask them directly here. So, BBC Media Action is the BBC’s international charity. We work in 30 countries and co-create content in 50 languages. We are fully funded by donors and our supporters. We are on the front line of global challenges like disinformation and the erosion of public trust. We support media organisations and media professionals to enhance their abilities and make them stronger and more resilient in fragile environments. And we believe that C2PA is a very crucial, very important development in open standards, and we are really interested in being part of these global conversations from now on. C2PA is an open standard for content authenticity and provenance, and it offers one promising approach to help audiences verify where content comes from and how or where it has been altered. We believe in including voices from the global majority to make these standards more applicable and relevant to global audiences. When we talk about technology generally, we talk about specifications and applications, but it’s important to bring in those diverse voices and understand their actual needs, how they work, what kind of challenges they are facing in their day-to-day life and work, and how innovation solutions like C2PA can fit in that area. So this is where we are focusing. At BBC Media Action, we launched an initiative called Pursuit of Truth. We are supporting a cohort of 30,000 media professionals and 1,000 vital media outlets, especially those that work in fragile environments and serve audiences there. As part of this commitment, we want to provide tools, technology and innovation solutions for them to gather the facts, deal with external pressures and give a platform to diverse voices. C2PA sits perfectly in that branch, alongside other open standards in the field. We also draw on world-class expertise and innovation to advance the ethical use of AI and content verification in media around the world. And we are making a big commitment to supporting research to understand how disinformation spreads and how to respond to it more effectively. So I want to hand over to Charlie.
So Charlie, what is C2PA in action, and how is the BBC using it?


Charlie Halford: Thank you very much. Hello, everybody. Yes, I’m Charlie, as Muge has let you all know by now. So I’m just going to take you through what C2PA is, how we’re using it at the BBC, and how we’d love to see C2PA adopted around the world and across the media ecosystem, plus some of the challenges that we see in that area and how maybe we can all work together to make it work. So let’s first start with part of our problem. These are three examples of disinformation that have had the BBC logo attached to them. These aren’t pieces of AI disinformation. This is just somebody with a video editor. They found the BBC logo. They found the BBC font. They know what the BBC’s graphics look like. And they’ve put out content where the footage underneath isn’t fake. They’ve just changed the message. They’ve put something on there that the BBC hasn’t published. And so this kind of problem is the one that we really wanted to address with C2PA. When the origin of content is hidden, maybe on a social media platform, all content can look the same. You might think all content is equally trustworthy. So in this world of disinformation, as a group of media organizations and other people across the world, how can our commitment to accuracy make a difference? That was one of the problems we tried to think about when we were looking at creating a new technology in BBC Research and Development. So what are some of the others? Answering the question “is the media I see genuine or authentic?” is the general thing we’re trying to solve. We know that there’s no real way to securely understand what or who created a piece of content. The existing metadata that we have, which can be really useful, is very easily faked. Anybody can add to it. Anybody can manipulate it. The media itself, as we just saw, is very easily manipulated, and there’s no guarantee that it’s original. And there’s no clear way, if you wanted to see it, to understand the journey of that content. What’s happened to it when it’s left the camera? Who’s changed it? Who’s added what? Who’s modified what? And so C2PA was really created to address some of those problems. These are some of the people that were involved, and it’s really grown to be a pretty huge coalition at this point. You can see along the top there, I’ve included some of the tech companies and product organizations. Adobe are a huge driver in this, but so are Microsoft, Google, Meta, OpenAI and Sony, all part of the C2PA board. And then I’ve added some of the news organizations that are getting involved, because this problem is one that we’re really trying to solve: WDR, BBC, CBC, Radio Canada, AFP, France TV. Many people are involved in this process, and there are many more on that list that I’ve not added. Before I get into this in detail, I just wanted to understand: how many people in our audience here have a technical background? Yeah, got a few people there. So C2PA itself is a technical standard. What it does is describe how you attach a cryptographic signature, the same kind that you might use on a website to give it that green lock, to a piece of content. It goes inside a file and binds to the image or the video or the audio to let you understand where that piece of content has come from. We use a hash to link it to the video and the audio, and we then use a signature across that.
So when we started working on this, we wanted to ask our users if this kind of thing makes sense, if it makes an actual difference. And so we’ve run a few studies. There’s recently been one run, I think, by the University of Bergen that’s expanded on this. But when we did it, we gave people two sets of the same kind of content: one had no provenance on it, the other had this C2PA data on it, and we asked, what’s your level of trust in each of these pieces of content? And the significant finding was that when we added extra transparency data, when we told people where this stuff had come from, they were more inclined to trust that content. And the important thing for other media organizations here is that it was the people who didn’t already use our website that were the most affected by that. So we then ran a trial. This is a piece of content that came into our verification team, BBC Verify. They did manual verification checks, and what we wanted to do was take the output of those verification checks and attach it to the content, and then we showed that to our audiences. We did that with about five pieces of content as our trial. It gives you much more detail when you click that blue button to expand it. And we did the same thing: we asked people if they would find it more trustworthy, and I think it was about 80% of people who said they found that extra data more useful, and it added more trust to the story. And then what we’ve also been doing is working with an organization called the IPTC to establish a way for publishers to get a certificate that proves who they are, so that people can’t impersonate you. So the BBC or AFP, in this example, gets a certificate from GlobalSign. They send it to the IPTC, and then we add that to a list of verified organizations, organizations whose identity has been verified. If you wanted to know how you can use it now, all of these pieces of software, and in some cases cameras, are available right now to make use of it. So if any of these are in use by you today, I’d encourage you to go and check them out. More are being developed. And with that, who are we handing to first?


Muge Ozkaptan: Yeah. We’ve been working closely with the BBC since last year, and we’ve included diverse voices from our partners in the workshops and the global conversations. We want to show you a talk in which the Chanzo’s Editor-in-Chief, Khalifa Said Rashid, shares the challenges he faces in Tanzania around mis- and disinformation and how C2PA is relevant to his work.


Khalifa Said Rashid: Hello. My name is Khalifa Said Rashid. I am the Editor-in-Chief of the Chanzo. The Chanzo is a digital media platform based here in Dar es Salaam, Tanzania, focusing on public interest journalism, public accountability and investigation. The major problems that we face here in Tanzania with regard to misinformation and disinformation include, but are not limited to, impersonation of brands, a phenomenon that has affected many media outlets here in Tanzania, including the Chanzo, where we have been forced on numerous occasions to come out publicly and deny content that has been shared on social media platforms impersonating our brand. And it has been very difficult for us to deal with situations like that because many people trust our brand, and when they see content online with our logos and brand colors, it can be very difficult for the average reader to tell whether it’s real or not. Another type of disinformation or misinformation we have seen, especially during times of crisis, is old video taken out of context resurfacing on social media, purported to be about the events happening during that week or day. We have been battling with these problems: you have multiple media outlets in Tanzania that produced content maybe two or three years ago, but it is not dated. If, for example, it relates to demonstrations or protests, and there is a protest on that day, this video resurfaces on social media, purporting to be happening on that day. And so in this context, we are very optimistic that a technology like C2PA offers huge potential for us as editors and journalists to counter misinformation and disinformation, because it allows the user to tell if the content is real or not. The technology allows media outlets, in partnership with platforms like Twitter, and now Facebook and others, to sign their content, which allows users to tell that this is really from the Chanzo, or really from this particular media outlet, and not an impersonation. And of course, we are also happy to work with BBC Media Action, which is helping us better understand this technology and apply it in our day-to-day operations. Thank you and goodbye.


Muge Ozkaptan: Well, thank you, Khalifa. I want to turn to you, Kyrylo. Can you talk about Suspilne a little bit? Who is Suspilne, and what do you do? What are your challenges? And what do you think about C2PA in your day-to-day work? Why is it relevant to you?


Kyrylo Lesin: Yeah, thank you, Muge. Hello, everyone. So I represent Suspilne. This is the public service media of Ukraine. It was established as an independent media institution eight years ago, and since that time, specifically five years ago, we started an intense digital transformation. There are a lot of outputs of this process, organizational and content-wise, and BBC Media Action and other partners have invested a lot of their time and resources to support us on this journey. Specifically, five years ago, our flagship digital news platform was launched. It is called Suspilne.media, and you can access it through any browser. Using this platform and our other digital channels, we deliver high-quality journalism. What does it mean? We recognize our main mission as empowering citizens’ decision-making by providing them with high-quality journalism. Our output is recognized by independent watchdog organizations as one of the most trustworthy journalistic products, meaning that we totally adhere to journalistic standards. And operating in the Ukrainian media context and the global context, we encounter, obviously, really aggressive competition for the attention of our audiences. Specifically, now we’re countering hybrid warfare and disinformation that has intensified severely since the full-scale Russian invasion of Ukraine. It has also affected the operational conditions, both for audiences and for media. For example, just this night, Russia launched more than 370 drones and missiles in total. Another aspect of the media sphere that has us tackling these challenges is the rise of AI-powered systems and algorithm-based newsfeeds. For example, Google Discover and all of these products operate, to some extent, as black boxes, and there is a real lack of signals and parameters they can embrace to arrange the content with the most value for the end user. And talking about C2PA technology and the pilot we would like to run with the BBC, the process now being led by Charlie is to get this technology incorporated into the end-to-end process of web publishing, so we can provide our audience with at least one additional means to draw a distinctive line between trustworthy content and other content. So the value is huge. It might sound boring when C2PA is recognized as just some standard, but for us at Suspilne we recognize it as an innovation vibe: some piece of code can dramatically change the way content appears on the screens of our end users, and they can end up changing their behavior, recognizing the high-quality content and putting their preference on it compared to some other resources. So yeah, this is the value.


Muge Ozkaptan: Thank you so much. Charlie, I want to ask you: where do you see this technology going next, and how can we actually achieve broader adoption, especially by the global majority?


Charlie Halford: Okay. Hopefully, where we will see the technology going next is an expansion in where it’s being used. We’re really hoping that we’re going to see more media organizations using it. We’re really hoping to be able to put a pilot in place with Suspilne; that would be fantastic. There are lots of other people that are involved. We’d also love to see more support from platforms, social media platforms. I think most media organizations get a lot of their traffic, a lot of their audiences, via social media platforms: that might be TikTok, it might be YouTube, it might be Facebook, lots of them. And then there are a few other considerations, I guess, to help us get that broader adoption. There’s a concept called redaction in C2PA. That’s the idea that you want to show people as much information as possible, right, to help them make a decision about whether this is trustworthy, but sometimes that information can hurt people. Maybe it could hurt the subject of the photo; maybe it could hurt the person taking it. So location and date and time: having the ability to remove those things where somebody might be put at risk is really important. That’s redaction, so we need to see that implemented. We’ve got device and tool adoption: we need to get to a place where, for any organization or any person taking a picture with their camera, it’s just built in; they don’t need to do anything special. I think that’s starting to happen, but we need that to roll out more. We also need, if you’re going to be part of this as a media organization, for you to be able to look after your private key. That’s the thing that’s going to be really important to you, so developing security procedures. We’ve talked about platforms and pilots, so I think really it’s about finding the right use case. What’s the best thing that helps you out? Maybe it’s showing users on your platform more detail about your content. Maybe it’s telling people on social media that this is really from you. Another one is considering how content comes into your organization: if lots of people send you images and videos, being able to detect whether they are genuine is really important, so do that at that point. And then media literacy is probably one of the biggest ones on this list: helping your users understand what all of this means. You can’t just put this information in front of people and expect them to understand it, so we have to use our products, we have to use our journalism, to explain to people what this means. Thank you very much. And how does C2PA work for AI-generated content? Okay, so on AI-generated content: a few organizations now, OpenAI and Microsoft, are actually putting AI labels into their content, and they’re using C2PA. If you click through to the next slide, some social media companies are now using that. If you click through again, this is an example of TikTok: where they detect a C2PA AI label, they actually let users know, and we’re hoping this will become more broadly adopted as we go forward. If you click again, that’s just the AI label, and then this next one is just a little video, so if you click play on that: this is a prototype that we’re working on. Here I’ve added an AI image, and in the background I’m inspecting the C2PA label, and because it was produced by one of those tools, you can see that it’s been AI generated. They’re not bulletproof at the moment; we still need to use other techniques
to detect whether something’s AI generated, but this is a good first step at that.


Muge Ozkaptan: Thank you very much. We have some time for questions, so please fire away. We’d like to hear your thoughts, any experiences, any challenges that you’re facing, and whether C2PA is important to you. I’m also checking Slido in case anything comes up online. Just a second, I think there is a question from over there.


Audience: Great, and I’m a big supporter of C2PA, but I wanted to ask a couple of questions on the public side of it, in terms of the public response and recognition of it. I’m Amy Mitchell with the Center for News Technology Innovation, and we look a lot at how we think about real public service in these kinds of things. There can be value in internal kinds of signals that maybe aren’t meant for public-facing purposes, but in the space where you’re looking to really have the public benefit and understand the integrity: in your research and tests, have you seen them recognize the signature, the content label that’s on there, and respond positively in choice, or is it more at this point about interest in it? I’d be curious, Charlie.


Charlie Halford: like yeah sure thank you that’s a yeah that’s a really good question so when we the research I showed there we we showed people without any any extra data and we showed them with it with the extra data we showed them a like a c2p a logo we didn’t get any comments back about any recognition yet so as a brand I don’t think the c2p a has much public recognition so I think in terms of media literacy that’s a job for us to do but in terms of us giving them the extra data that was a big trust indicator so that that had a direct impact not just on interest but on how trustworthy people found the content itself


Muge Ozkaptan: Thank you. We have a question on Slido: is there any plan to integrate C2PA into existing regulations for information integrity, such as the EU DSA or the UK’s Online Safety Act?


Charlie Halford: I guess it’s been looked at by many different organizations, many regulatory bodies around the world. I’m not sure if it has been named directly in any of them, but there’s quite a lot of regulation starting to come out that talks about the need to label things, particularly from an AI perspective, so you’re getting a lot more AI labeling requirements. Whether we would ever push to get C2PA as a technology embedded into legislation, I’m not sure. It might be useful to get some movement, but then if there are standard changes at a later date, maybe we’d want some flexibility. But the idea of provenance, the idea of labeling, I think would be really great.


Muge Ozkaptan: I think we have time for one more question, if anyone in the room wants to ask one. I just want to ask a last question about the social media platforms: how are they adopting it so far, and what has the response been, in detail?


Charlie Halford: I think the response has been mixed but positive. Most of the adoption we see is social media platforms using C2PA to understand whether something has been labeled as AI; they’re most interested in that situation. We’ve seen less adoption for labeling things as being from the BBC or from Suspilne, or maybe showing you more detail about your media. But we’re hopeful that the more content we see, the more we publish, the more social media organizations will start to adopt, and really it’s for us to request that of those platforms, I think.


Muge Ozkaptan: Well, thank you so much. We are at the end of the session now, but if you’re interested in investing in these standards, or if you have questions or ideas to share, we’re just here, so come and join the conversation. Thank you very much.



Charlie Halford

Speech speed

155 words per minute

Speech length

2132 words

Speech time

823 seconds

C2PA is an open standard that uses cryptographic signatures to verify content authenticity and provenance, addressing problems where media can be easily manipulated or impersonated

Explanation

C2PA describes how to attach cryptographic signatures (similar to website security certificates) to content files, binding to images, videos, or audio to show their origin. This addresses the problem that existing metadata is easily faked and there’s no secure way to understand what or who created content.


Evidence

Examples of BBC logo being used in fake content with manipulated messages; demonstration that anyone with video editing software can impersonate media brands; involvement of major tech companies like Adobe, Microsoft, Google, Meta, OpenAI, Sony as part of C2PA board


Major discussion point

Content authenticity and verification technology


Topics

Digital standards | Content policy | Liability of intermediaries


Agreed with

– Khalifa Said Rashid
– Kyrylo Lesin

Agreed on

C2PA technology offers valuable solutions for content authenticity verification


BBC has conducted trials showing that C2PA provenance data increases user trust, with around 80% of users finding the extra data more useful, particularly among users who don’t regularly use BBC’s website

Explanation

BBC ran studies comparing user trust levels between content with and without C2PA provenance data, finding significant increases in trust when transparency data was added. They conducted trials with BBC Verify team’s verification checks attached to content, which users found more useful and trustworthy.


Evidence

User study results showing 80% of people found extra provenance data more useful and trustworthy; specific mention that people who didn’t use BBC website were most affected by the additional transparency


Major discussion point

User trust and content verification effectiveness


Topics

Content policy | Consumer protection | Digital identities


C2PA works with existing software and cameras that are available now, with major tech companies like Adobe, Microsoft, Google, Meta, and OpenAI being part of the coalition

Explanation

The technology is currently implementable through existing tools and devices, with widespread industry support from major technology companies. The coalition has grown significantly and includes both tech companies and news organizations working together on the standard.


Evidence

List of software and cameras currently supporting C2PA; mention of tech companies (Adobe, Microsoft, Google, Meta, OpenAI, Sony) and news organizations (WDR, BBC, CBC, Radio Canada, AFP, France TV) as coalition members


Major discussion point

Technology adoption and industry collaboration


Topics

Digital standards | Digital business models | Convergence and OTT


BBC faces impersonation problems where fake content uses BBC logos and branding, making it difficult for audiences to distinguish authentic content

Explanation

The BBC regularly encounters disinformation that uses their visual branding, logos, and fonts to create fake content that appears legitimate. This creates confusion for audiences who cannot easily distinguish between authentic BBC content and impersonated content on social media platforms.


Evidence

Three specific examples of disinformation with BBC logos attached; explanation that these weren’t AI-generated but created with basic video editing tools using BBC branding elements


Major discussion point

Brand impersonation and media authenticity challenges


Topics

Content policy | Intellectual property rights | Liability of intermediaries


Agreed with

– Khalifa Said Rashid

Agreed on

Brand impersonation is a major challenge for media organizations


Broader adoption requires expansion to more media organizations, increased platform support, device integration, and better security procedures for managing private keys

Explanation

For C2PA to be effective globally, it needs wider implementation across media organizations, better support from social media platforms, built-in camera/device integration, and robust security procedures. The technology also needs concepts like redaction to protect sensitive information while maintaining transparency.


Evidence

Mention of redaction concept for protecting location/date/time data that could hurt subjects; need for device integration so cameras automatically include C2PA without special procedures; importance of private key security management


Major discussion point

Technology scaling and implementation challenges


Topics

Digital standards | Network security | Privacy and data protection


Media literacy education is crucial for helping users understand what C2PA information means, as the technology itself doesn’t have much public recognition yet

Explanation

While users respond positively to additional transparency data, they don’t yet recognize the C2PA brand or understand what the information means. Media organizations need to use their platforms and journalism to educate users about content provenance and verification.


Evidence

Research showing users didn’t recognize C2PA logo but responded positively to extra transparency data; acknowledgment that C2PA as a brand has little public recognition


Major discussion point

Public education and technology literacy


Topics

Online education | Content policy | Multilingualism


Social media platforms show mixed but positive response, primarily adopting C2PA for AI-generated content labeling rather than general content verification

Explanation

Social media platforms are beginning to implement C2PA technology, but mainly focus on detecting and labeling AI-generated content rather than broader content verification. There’s less adoption for showing detailed media provenance or publisher verification.


Evidence

Examples of TikTok detecting C2PA AI labels and notifying users; mention of OpenAI and Microsoft putting AI labels into content using C2PA; demonstration of prototype detecting AI-generated images


Major discussion point

Platform adoption and AI content labeling


Topics

Content policy | Digital standards | Liability of intermediaries


Disagreed with

– Kyrylo Lesin

Disagreed on

Platform adoption priorities and effectiveness



Khalifa Said Rashid

Speech speed

130 words per minute

Speech length

387 words

Speech time

178 seconds

The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content

Explanation

The Chanzo faces two main disinformation challenges: impersonation using their logos and brand colors, and old video content being taken out of context and reshared during current events. These problems are particularly difficult because readers trust their brand, making fake content appear credible.


Evidence

Specific examples of having to publicly deny impersonated content; description of old protest/demonstration videos resurfacing during current events without proper dating; explanation of how trusted brand recognition makes fake content more believable


Major discussion point

Regional media challenges with disinformation


Topics

Content policy | Intellectual property rights | Freedom of the press


Agreed with

– Charlie Halford
– Kyrylo Lesin

Agreed on

C2PA technology offers valuable solutions for content authenticity verification



Kyrylo Lesin

Speech speed

123 words per minute

Speech length

441 words

Speech time

214 seconds

Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion

Explanation

Suspilne operates in an environment of hybrid warfare where disinformation is used as a weapon, with conditions severely affected since Russia’s full-scale invasion. They face aggressive competition for audience attention while trying to maintain journalistic standards and provide trustworthy content.


Evidence

Mention of 370+ drones and missiles launched in a single night; description of hybrid warfare and disinformation intensifying since Russian invasion; recognition by independent watchdog organizations as trustworthy journalism


Major discussion point

Media operations during wartime and hybrid warfare


Topics

Cyberconflict and warfare | Content policy | Freedom of the press


Agreed with

– Charlie Halford
– Khalifa Said Rashid

Agreed on

C2PA technology offers valuable solutions for content authenticity verification


Disagreed with

– Charlie Halford

Disagreed on

Platform adoption priorities and effectiveness


Suspilne is Ukraine’s public service media established eight years ago, focusing on digital transformation and delivering trustworthy journalism recognized by independent watchdog organizations

Explanation

Suspilne was established as an independent media institution that has undergone significant digital transformation, launching their flagship digital platform Suspilne.media. Their mission is to empower citizens’ decision-making through high-quality journalism that adheres to professional standards.


Evidence

Specific timeline of 8 years since establishment and 5 years of digital transformation; launch of Suspilne.media platform; recognition by independent watchdog organizations as trustworthy journalism; support from BBC Media Action and other partners


Major discussion point

Public service media digital transformation


Topics

Digital business models | Online education | Freedom of the press


Agreed with

– Muge Ozkaptan

Agreed on

Media organizations need support for digital transformation and capacity building



Muge Ozkaptan

Speech speed

131 words per minute

Speech length

952 words

Speech time

435 seconds

BBC Media Action works in 30 countries with content in 50 languages, focusing on supporting media organizations in fragile environments through their ‘Pursuit of Truth’ initiative

Explanation

BBC Media Action is BBC’s international charity that operates globally to support media organizations and professionals in challenging environments. They focus on enhancing capabilities and resilience of media organizations facing disinformation and threats to public trust.


Evidence

Specific numbers: 30 countries, 50 languages, fully funded by donors; description of working on frontline of global challenges like disinformation; focus on fragile environments and global majority voices


Major discussion point

International media development and support


Topics

Capacity development | Cultural diversity | Freedom of the press


The organization supports 30,000 media professionals and 1,000 media outlets, providing tools and technology to deal with external pressures and gather facts

Explanation

Through the Pursuit of Truth initiative, BBC Media Action provides comprehensive support including tools, technology, and innovation solutions to help media professionals work effectively under pressure. They aim to advance ethical AI use and content verification while supporting research on disinformation.


Evidence

Specific numbers: 30,000 media professionals and 1,000 media outlets; mention of providing tools, technology, and innovation solutions; commitment to supporting research on how disinformation spreads


Major discussion point

Media capacity building and technology support


Topics

Capacity development | Digital access | Online education


Agreed with

– Kyrylo Lesin

Agreed on

Media organizations need support for digital transformation and capacity building



Audience

Speech speed

160 words per minute

Speech length

161 words

Speech time

60 seconds

Research shows public interest in content integrity signals, with users responding positively to additional transparency data even without recognizing the C2PA brand specifically

Explanation

Questions were raised about public recognition and response to C2PA technology, specifically whether users recognize the content labels and respond positively in their choices or if it’s more about general interest in integrity signals. The focus is on understanding the public benefit and service aspect of the technology.


Evidence

Reference to research by Center for News Technology Innovation; distinction between internal signals versus public-facing purposes; question about recognition of signatures and positive response in choice behavior


Major discussion point

Public awareness and response to content verification technology


Topics

Consumer protection | Online education | Content policy


Agreements

Agreement points

Brand impersonation is a major challenge for media organizations

Speakers

– Charlie Halford
– Khalifa Said Rashid

Arguments

BBC faces impersonation problems where fake content uses BBC logos and branding, making it difficult for audiences to distinguish authentic content


The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content


Summary

Both BBC and The Chanzo face significant challenges with their brands being impersonated through fake content using their logos and visual branding, creating confusion for audiences who trust these brands


Topics

Content policy | Intellectual property rights | Liability of intermediaries


C2PA technology offers valuable solutions for content authenticity verification

Speakers

– Charlie Halford
– Khalifa Said Rashid
– Kyrylo Lesin

Arguments

C2PA is an open standard that uses cryptographic signatures to verify content authenticity and provenance, addressing problems where media can be easily manipulated or impersonated


The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content


Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion


Summary

All speakers recognize C2PA as a promising technology solution that can help media organizations verify content authenticity and combat disinformation challenges they face in their respective contexts


Topics

Digital standards | Content policy | Cyberconflict and warfare


Media organizations need support for digital transformation and capacity building

Speakers

– Muge Ozkaptan
– Kyrylo Lesin

Arguments

The organization supports 30,000 media professionals and 1,000 media outlets, providing tools and technology to deal with external pressures and gather facts


Suspilne is Ukraine’s public service media established eight years ago, focusing on digital transformation and delivering trustworthy journalism recognized by independent watchdog organizations


Summary

Both speakers emphasize the importance of supporting media organizations through digital transformation initiatives, providing tools and technology to enhance their capabilities


Topics

Capacity development | Digital business models | Online education


Similar viewpoints

Both speakers emphasize the critical importance of education and capacity building – Charlie focuses on media literacy for users to understand C2PA technology, while Muge focuses on supporting media organizations globally with tools and knowledge

Speakers

– Charlie Halford
– Muge Ozkaptan

Arguments

Media literacy education is crucial for helping users understand what C2PA information means, as the technology itself doesn’t have much public recognition yet


BBC Media Action works in 30 countries with content in 50 languages, focusing on supporting media organizations in fragile environments through their ‘Pursuit of Truth’ initiative


Topics

Online education | Content policy | Capacity development


Both media organizations operate in challenging environments where they face sophisticated disinformation campaigns that threaten their credibility and require active countermeasures

Speakers

– Khalifa Said Rashid
– Kyrylo Lesin

Arguments

The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content


Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion


Topics

Content policy | Freedom of the press | Cyberconflict and warfare


Unexpected consensus

User trust increases with transparency even without brand recognition

Speakers

– Charlie Halford
– Audience

Arguments

BBC has conducted trials showing that C2PA provenance data increases user trust, with around 80% of users finding the extra data more useful, particularly among users who don’t regularly use BBC’s website


Research shows public interest in content integrity signals, with users responding positively to additional transparency data even without recognizing the C2PA brand specifically


Explanation

It’s somewhat unexpected that users would respond so positively to technical transparency data (C2PA provenance information) even when they don’t recognize or understand the specific technology brand. This suggests that the mere presence of additional verification information builds trust, regardless of technical literacy


Topics

Consumer protection | Online education | Content policy


Global media challenges are remarkably similar across different contexts

Speakers

– Charlie Halford
– Khalifa Said Rashid
– Kyrylo Lesin

Arguments

BBC faces impersonation problems where fake content uses BBC logos and branding, making it difficult for audiences to distinguish authentic content


The Chanzo in Tanzania struggles with brand impersonation and out-of-context video content being reshared during crisis situations, forcing them to publicly deny fake content


Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion


Explanation

Despite operating in vastly different contexts (UK public broadcaster, Tanzanian digital outlet, Ukrainian public media during wartime), all three organizations face remarkably similar challenges with brand impersonation and content manipulation, suggesting these are universal problems in the digital media landscape


Topics

Content policy | Freedom of the press | Digital standards


Overall assessment

Summary

There is strong consensus among all speakers that content authenticity and brand impersonation are critical challenges facing media organizations globally, and that C2PA technology offers a promising solution. All speakers agree on the need for capacity building, education, and technological solutions to combat disinformation.


Consensus level

High level of consensus with no significant disagreements identified. The implications are positive for C2PA adoption, as there appears to be unified support from diverse stakeholders (technology developers, international media development organizations, and media outlets from different regions). This consensus suggests strong potential for collaborative implementation and scaling of the technology across different contexts and regions.


Differences

Different viewpoints

Platform adoption priorities and effectiveness

Speakers

– Charlie Halford
– Kyrylo Lesin

Arguments

Social media platforms show mixed but positive response, primarily adopting C2PA for AI-generated content labeling rather than general content verification


Suspilne in Ukraine faces aggressive disinformation campaigns as part of hybrid warfare, particularly intensified since the Russian invasion


Summary

Charlie presents a measured view of platform adoption focusing on AI content labeling, while Kyrylo emphasizes the urgent need for broader content verification tools due to wartime disinformation challenges. Their perspectives differ on the adequacy of current platform responses.


Topics

Content policy | Liability of intermediaries | Cyberconflict and warfare


Unexpected differences

Public readiness versus technology deployment

Speakers

– Charlie Halford
– Audience

Arguments

Media literacy education is crucial for helping users understand what C2PA information means, as the technology itself doesn’t have much public recognition yet


Research shows public interest in content integrity signals, with users responding positively to additional transparency data even without recognizing the C2PA brand specifically


Explanation

While both acknowledge positive user response to transparency data, there’s an unexpected tension between Charlie’s emphasis on the need for extensive media literacy education and the audience member’s research suggesting users already respond positively without brand recognition. This reveals disagreement about whether public education should precede or accompany technology deployment.


Topics

Online education | Consumer protection | Content policy


Overall assessment

Summary

The discussion shows minimal direct disagreement, with most differences stemming from varying operational contexts rather than fundamental philosophical disputes about C2PA technology


Disagreement level

Low level of disagreement with high consensus on C2PA’s value. The main tensions relate to implementation priorities, urgency levels, and sequencing of education versus deployment. This suggests strong foundational agreement that should facilitate collaborative implementation, though coordination may be needed to address different regional and operational priorities.




Takeaways

Key takeaways

C2PA is a promising open standard for content authenticity that uses cryptographic signatures to verify media provenance and combat disinformation


Research demonstrates that C2PA significantly increases user trust in content, with 80% of users finding provenance data more useful and trustworthy


Media organizations globally face similar challenges with brand impersonation and content manipulation, from BBC’s logo misuse to Tanzania’s Chanzo dealing with fake branded content


The technology is currently available and being implemented by major tech companies, but broader adoption requires coordinated effort across platforms, devices, and organizations


Media literacy education is crucial for public understanding and adoption, as users respond positively to transparency data even without recognizing the C2PA brand


C2PA shows particular promise for AI-generated content labeling, with platforms like TikTok already implementing detection and labeling systems


Resolutions and action items

BBC Media Action will continue supporting global media organizations through workshops and conversations to include diverse voices in C2PA development


A pilot implementation is planned between BBC and Suspilne to integrate C2PA into end-to-end web publishing processes


Media organizations need to develop security procedures for managing private keys required for C2PA implementation


Continued research and user studies are needed to understand public response and optimize implementation strategies


Unresolved issues

Limited social media platform adoption beyond AI content labeling – platforms show mixed response to general content verification features


Lack of public recognition of the C2PA brand itself, requiring significant media literacy education efforts


Need for broader device and tool integration to make C2PA automatic rather than requiring special procedures


Implementation of redaction capabilities to protect sensitive information while maintaining transparency


Uncertainty about regulatory integration with existing information integrity laws like EU DSA or UK Online Safety Act


Challenge of scaling adoption across the global majority and diverse media environments with varying technical capabilities


Suggested compromises

Flexible approach to regulatory integration that allows for standard changes while promoting provenance labeling requirements


Gradual implementation starting with specific use cases (like AI content labeling) before expanding to general content verification


Balancing transparency with safety through redaction capabilities that can hide sensitive location, date, or personal information when needed


Thought provoking comments

When we talk about technology generally, we talk about specifications and applications, but it’s important to bring in those diverse voices and understand their actual needs, how they work, what kind of challenges they are facing in their day-to-day life and work, and how innovation solutions like C2PA can fit in that area.

Speaker

Muge Ozkaptan


Reason

This comment is insightful because it highlights a critical gap in technology development – the tendency to focus on technical specifications without adequately considering the real-world needs of diverse global users. It challenges the typical tech-centric approach and emphasizes the importance of inclusive design.


Impact

This comment set the foundational framework for the entire discussion, establishing that the session would prioritize voices from the global majority rather than just technical implementation. It directly led to featuring perspectives from Tanzania and Ukraine, demonstrating practical challenges in different contexts.


These aren’t pieces of AI disinformation. This is just somebody with a video editor. They found the BBC logo. They found the BBC font. They know what the BBC’s graphics look like. And they’ve put out content where the footage underneath isn’t fake. They’ve just changed the message.

Speaker

Charlie Halford


Reason

This observation is thought-provoking because it reframes the disinformation problem beyond AI-generated content to include simple brand impersonation. It demonstrates that sophisticated AI isn’t always necessary for effective disinformation, making the problem both cheaper to perpetrate and more widespread.


Impact

This comment shifted the discussion from focusing solely on AI-generated content to broader authenticity challenges. It provided concrete context that resonated with the media partners’ experiences, particularly Khalifa’s later description of brand impersonation issues in Tanzania.


And it has been very difficult for us to deal with situations like that because many people trust our brand, and when they see content online with our logos and brand colors, it can be very difficult for the average reader to tell whether it’s real or not.

Speaker

Khalifa Said Rashid


Reason

This comment is particularly insightful because it illustrates how brand trust, typically an asset, becomes a vulnerability in the disinformation landscape. It shows the real-world impact on media organizations in developing countries where resources for combating impersonation may be limited.


Impact

This comment provided crucial validation for the C2PA initiative by demonstrating actual harm experienced by media organizations. It moved the discussion from theoretical benefits to concrete use cases, strengthening the argument for C2PA adoption.


For example, Google Discover and all of these products operate, to some extent, as black boxes, and there is a real lack of signals and parameters they can embrace to arrange the content with the most value for the end user.

Speaker

Kyrylo Lesin


Reason

This comment is thought-provoking because it identifies a systemic problem with algorithmic content distribution – the lack of quality signals that algorithms can use to prioritize trustworthy content. It suggests that C2PA could serve as a quality signal in algorithmic systems.


Impact

This comment expanded the discussion beyond direct user verification to consider how C2PA could influence content distribution algorithms. It introduced a new dimension of impact – not just helping users identify trustworthy content, but potentially helping platforms prioritize it.


You can’t just put this information in front of people and expect them to understand it, so we have to use our products, we have to use our journalism, to explain to people what this means.

Speaker

Charlie Halford


Reason

This comment is insightful because it acknowledges that technical solutions alone are insufficient – they require accompanying education and communication strategies. It recognizes the responsibility of media organizations to bridge the gap between technical capability and user understanding.


Impact

This comment introduced the critical element of media literacy as essential for C2PA success. It shifted the conversation from technical implementation to user education, highlighting that adoption requires both technological and educational components.


In the space where you’re looking to really have the public benefit and understand the integrity: in your research and tests, have you seen them recognize the signature, the content label that’s on there, and respond positively in choice, or is it more at this point about interest in it?

Speaker

Amy Mitchell (Audience)


Reason

This question is thought-provoking because it challenges the distinction between user interest in authenticity features versus actual behavioral change. It probes whether C2PA creates measurable impact on user decision-making or remains at the level of expressed preference.


Impact

This question prompted important clarification about the current state of C2PA recognition and effectiveness. It revealed that while users respond positively to additional transparency data, C2PA as a brand lacks public recognition, highlighting the need for better communication strategies.


Overall assessment

These key comments collectively shaped the discussion by establishing a human-centered rather than technology-centered approach to C2PA adoption. The conversation evolved from technical specifications to real-world applications, then to implementation challenges, and finally to the critical importance of user education and platform adoption. The diverse perspectives from different global contexts (UK, Tanzania, Ukraine) demonstrated both universal challenges (brand impersonation, content authenticity) and context-specific needs (operating under warfare conditions, resource constraints). The discussion successfully balanced technical capabilities with practical implementation concerns, ultimately emphasizing that successful C2PA adoption requires not just technical standards but also media literacy, platform cooperation, and understanding of diverse global media environments.


Follow-up questions

How can we achieve broader adoption of C2PA, especially by the global majority?

Speaker

Muge Ozkaptan


Explanation

This addresses the need to expand C2PA implementation beyond current adopters to include more diverse global voices and organizations, particularly those in fragile environments and developing countries


How do we get more support from social media platforms for C2PA implementation?

Speaker

Charlie Halford


Explanation

Platform adoption is crucial since most media organizations get significant traffic through social media, and broader platform support would increase the technology’s effectiveness


How can we improve public recognition and understanding of C2PA branding and signaling?

Speaker

Amy Mitchell (audience member)


Explanation

Research showed that while extra data increased trust, there was no recognition of C2PA as a brand, indicating a need for better public awareness and media literacy efforts


What are the plans for integrating C2PA into existing regulations for information integrity such as EU DSA or UK Online Safety Act?

Speaker

Online participant (via Slido)


Explanation

Understanding regulatory integration could help accelerate adoption and provide legal framework support for the technology


How can we implement redaction capabilities in C2PA to protect people who might be at risk?

Speaker

Charlie Halford


Explanation

This addresses the need to balance transparency with safety, allowing removal of sensitive information like location and time data that could endanger subjects or photographers


How can we develop better security procedures for media organizations to manage their private keys?

Speaker

Charlie Halford


Explanation

Private key management is critical for maintaining the integrity and trustworthiness of the C2PA system for media organizations
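To make the custody point concrete, here is a toy Python sketch (using the widely available cryptography package) of the hash-and-sign step that provenance schemes depend on. This is not the actual C2PA manifest format – C2PA uses COSE signatures and X.509 certificate chains – it only illustrates why key custody matters: anyone holding the private key can sign content that verifies as authentic.

```python
# Toy sketch, NOT the C2PA spec: sign a media file's hash with a private
# key so downstream tools can verify the bytes haven't changed since signing.
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in practice: kept in an HSM, never exported
public_key = private_key.public_key()       # published so anyone can verify

media_bytes = b"...jpeg bytes..."           # placeholder content
digest = hashlib.sha256(media_bytes).digest()
signature = private_key.sign(digest)

# verify() raises InvalidSignature if either the media or the signature changed;
# a leaked private key lets an attacker produce signatures that pass this check.
public_key.verify(signature, digest)
print("signature verified")
```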


How can we achieve device and tool adoption so C2PA is built into cameras and content creation tools by default?

Speaker

Charlie Halford


Explanation

Seamless integration into content creation workflows is essential for widespread adoption without requiring special technical knowledge from users


How can we better detect AI-generated content and improve the reliability of AI labeling in C2PA?

Speaker

Charlie Halford


Explanation

Current AI detection methods are not bulletproof, and improving these capabilities is crucial as AI-generated content becomes more sophisticated


How can we develop effective media literacy programs to help users understand what C2PA information means?

Speaker

Charlie Halford


Explanation

Simply providing technical information isn’t enough; users need education to understand and effectively use provenance data for decision-making


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #280 The DNS Trust Horizon Safeguarding Digital Identity

WS #280 The DNS Trust Horizon Safeguarding Digital Identity

Session at a glance

Summary

This workshop at the Internet Governance Forum focused on DNS trust and safeguarding digital identity, examining two key challenges: blockchain identifiers integration with DNS and online harm mitigation. The session was jointly organized by the Dynamic Coalition on DNS Issues and the Dynamic Coalition on Data and Trust, with discussions framed around UN Sustainable Development Goal 9 regarding resilient infrastructure and innovation.


The first topic addressed blockchain identifiers and their responsible integration with the existing DNS system. Speakers emphasized that while blockchain technologies offer potential benefits for digital identity, they present significant challenges including name collisions, governance issues, and threats to the single authoritative root principle that underpins DNS stability. Research revealed actual collisions between blockchain identifiers and existing domain names, with some blockchain providers creating top-level identifiers that conflict with established gTLDs and ccTLDs. Panelists stressed the importance of responsible integration rather than replacement of DNS, advocating for multi-stakeholder collaboration to develop standards and best practices that preserve DNS security while enabling innovation.


The second discussion focused on online harm mitigation, particularly addressing scams, fraud, and DNS abuse. Speakers shared various approaches, including Norway’s .no registry model that requires identity verification and limits domain registrations per holder, which has proven effective in reducing abuse. The Global Signal Exchange was presented as a new initiative to improve threat intelligence sharing across sectors, processing hundreds of millions of threat signals to enable faster response times. Multiple panelists emphasized that combating online harms requires coordinated action across the entire internet infrastructure stack, from registries and registrars to hosting providers and content platforms. The discussion concluded with recognition that these challenges require ongoing multi-stakeholder engagement and innovative approaches to maintain trust in digital infrastructure while supporting legitimate innovation.


Keypoints

## Major Discussion Points:


– **Blockchain Integration with DNS Systems**: The panel extensively discussed the challenges and opportunities of integrating blockchain-based naming systems with the traditional DNS infrastructure. Key concerns included name collisions between blockchain identifiers and existing domain names, the need for responsible integration rather than replacement of DNS, and maintaining the single authoritative root principle that ensures DNS stability.


– **Multi-stakeholder Governance and Coordination**: A recurring theme was the critical need for multi-stakeholder engagement to address both blockchain integration and online harm mitigation. Speakers emphasized that no single entity – whether government, private sector, or civil society – can solve these complex issues alone, and that established forums like ICANN and IGF provide essential venues for this coordination.


– **Online Harm Mitigation and DNS Abuse**: The discussion covered various forms of DNS abuse including phishing, malware, spam, domain spoofing, and cyber-squatting. Panelists shared different approaches to combating these harms, from Norway’s strict identity verification requirements for .no domains to Meta’s efforts to combat brand impersonation and the Global Signal Exchange’s cross-sector threat intelligence sharing platform.


– **Data Sharing and Real-time Threat Detection**: Multiple speakers highlighted the importance of improved data sharing mechanisms for combating online fraud and scams. The discussion covered initiatives like the Global Signal Exchange’s “Quick Factors” (quantity, immediacy, quality) approach and the need for faster mitigation times, currently averaging four days from detection to action.


– **Technical Standards and Best Practices**: The conversation addressed the need for developing technical standards for responsible DNS integration with blockchain systems, including work being done in IETF working groups, and the importance of maintaining DNS security through existing mechanisms like DNSSEC while considering future enhancements.


## Overall Purpose:


The workshop aimed to examine how the Domain Name System needs to evolve to address emerging challenges in digital trust and identity, specifically focusing on blockchain identifier integration and multi-stakeholder approaches to fighting online harms including scams and fraud. The session was designed to initiate multi-stakeholder conversations on these complex issues in the context of the WSIS+20 review and UN Sustainable Development Goal 9 (building resilient infrastructure and fostering innovation).


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, with participants demonstrating technical expertise while acknowledging the complexity of the challenges. The atmosphere was professional and solution-oriented, with speakers building on each other’s points rather than engaging in adversarial debate. There was a sense of urgency about addressing these issues, particularly around online harms, but also recognition that sustainable solutions require careful coordination and responsible implementation. The tone remained consistently focused on finding practical, multi-stakeholder approaches to these technical and policy challenges.


Speakers

**Speakers from the provided list:**


– **Emily Taylor** – CEO of Oxford Information Labs and co-founder of the Global Signal Exchange


– **Keith Drazek** – Vice President, Policy and Government Relations at Verisign (session moderator)


– **Benoit Ampeau** – Director of partnerships and innovation at AFNIC, the French internet registry


– **Swapneel Sheth** – Senior director of research engineering at Verisign in the office of the chief technology officer


– **Hilde Thunem** – Managing director of NORID, the Norwegian ccTLD registry (.no)


– **Lucien Taylor** – CTO and founder of the Global Signal Exchange, a global clearinghouse for real-time sharing of scam and fraud signals


– **Rima Amin** – Security policy manager, community protection with Meta


– **Graeme Bunton** – Executive director, NetBeacon Institute (participated online)


– **Edmund Chung** – From .Asia


– **Andrew Campling** – From 419 consultancy


– **Bertrand de La Chapelle** – Executive Director of the Internet and Jurisdiction Policy Network


– **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)


– **Audience** – Individual from Senegal named Yuv (role/title not specified)


**Additional speakers:**


– **Dr. Esther Yarmitsky** – UK Department for Science, Innovation and Technology, has a PhD in internet governance, speaking in personal capacity (mentioned in the introduction but appears in the transcript as “Participant”)


Full session report

# DNS Trust and Safeguarding Digital Identity: A Comprehensive Workshop Report


## Executive Summary


This workshop at the Internet Governance Forum brought together leading experts from across the DNS ecosystem to examine two critical challenges facing digital infrastructure: the responsible integration of blockchain identifiers with the Domain Name System and the mitigation of online harms through coordinated multi-stakeholder approaches. Jointly organised by the Dynamic Coalition on DNS Issues and the Dynamic Coalition on Data and Trust, the session was framed within the context of UN Sustainable Development Goal 9, which emphasises building resilient infrastructure and fostering innovation.


The discussion revealed consensus amongst participants that both challenges require multi-stakeholder coordination rather than fragmented individual responses. Speakers advocated for integration rather than replacement of existing DNS infrastructure, whilst acknowledging the urgent need for proactive measures to combat the rising scale of cybercrime and online fraud.


## Opening Context and Framework


Keith Drazek, serving as moderator and Vice President of Policy and Government Relations at Verisign, established the session’s framework by connecting the technical discussions to broader policy objectives. He positioned the workshop within the WSIS+20 review process and UN Sustainable Development Goal 9, emphasising how DNS evolution must support sustainable development goals whilst maintaining stability and security.


Emily Taylor, CEO of Oxford Information Labs and co-founder of the Global Signal Exchange, provided brief opening remarks introducing the workshop before handing over to Drazek for the main facilitation.


## Blockchain Identifiers and DNS Integration


### Research Findings on Blockchain-DNS Collisions


Benoit Ampeau, Director of partnerships and innovation at AFNIC, presented research findings revealing actual collisions between blockchain identifiers and existing domain names. Working with the DNS Research Federation, AFNIC identified specific instances where blockchain naming systems have created identifiers that conflict with established generic top-level domains (gTLDs) and country code top-level domains (ccTLDs).


Ampeau provided concrete examples of these collisions, citing blockchain identifiers such as .wallet and .crypto, and recalling strings like .corp, .home, and .mail from the 2012 gTLD round, all of which conflict with existing or intended DNS namespace usage. He emphasised that these collisions create security risks for users and potential instability for the DNS system.


### Technical Implementation Perspectives


Swapneel Sheth, Senior Director working in Verisign’s office of the chief technology officer, addressed technical challenges facing DNS-blockchain integration. He highlighted critical lifecycle management issues, asking: “How do we think about a domain name that’s transferred or expires after the domain name has been integrated into the blockchain application? How do we avoid risks with inconsistencies, with the security concerns that come along when the same names are used across multiple systems?”


Sheth noted that whilst DNS integrations with blockchain applications have potential, they require responsible implementation to avoid security inconsistencies. He reported encouraging engagement from the blockchain community through collaborative draft development in IETF working groups.


### Strategic Integration Approach


Dr. Esther Yarmitsky from the UK Department for Science, Innovation and Technology, speaking in her personal capacity, argued for integration rather than replacement of DNS infrastructure. She emphasised the importance of answering the question of whether we “integrate this blockchain system into the global domain name system, or do we watch our infrastructure fragment in dangerous ways”.


Yarmitsky advocated for blockchain as a potential secondary security layer that could enhance existing DNSSEC capabilities whilst preserving the single authoritative root principle that ensures DNS stability.


### Industry Questions and Concerns


Edmund Chung from .Asia raised questions about the technical necessity of blockchain enhancements, noting that DNSSEC already provides cryptographic validation and questioning the added value of blockchain for DNS security.


Andrew Campling from 419 consultancy observed that Web3 naming schemes lack mature governance structures and could benefit from DNS governance lessons. He also raised concerns about the environmental and computational costs of implementing dual cryptographic validation systems.


## Online Harm Mitigation: Multi-Stakeholder Approaches


### The Norwegian Model: Identity Verification and Domain Limits


Hilde Thunem, Managing Director of NORID (the Norwegian ccTLD registry for .no), presented a detailed case study of how targeted interventions can effectively reduce DNS abuse. The Norwegian approach requires identity verification for domain registrations, including organization numbers for businesses and national identity numbers for individuals, and limits the number of domains that individual registrants can hold.


Thunem explained that this approach creates friction for scammers whilst maintaining accessibility for legitimate users. She provided a concrete example: “If you want to register santa.no, you have to prove that you are Santa Claus,” illustrating how identity verification prevents impersonation and abuse.


The Norwegian model has proven highly effective in reducing abuse within the .no namespace, demonstrating that well-designed registration policies can significantly impact abuse levels. Thunem also emphasised the importance of robust legal frameworks with clear responsibilities and due process protections.


### Global Signal Exchange: Cross-Sector Threat Intelligence


Lucien Taylor, CTO and founder of the Global Signal Exchange, presented an innovative approach to combating online fraud through cross-sectoral threat intelligence sharing. The Global Signal Exchange operates as a clearinghouse for real-time sharing of scam and fraud signals, processing threat indicators that have grown from 40 million to 270 million, rising by approximately one million per day.


Taylor highlighted a critical asymmetry: “The criminals are moving faster than us. They’re exploiting cross-border legislative tensions and sharing bad things between each other better than we share things.” The platform currently has 160 organisations in its accreditation pipeline, representing significant expansion of cross-sector collaboration.


### Platform Perspectives: Meta’s Approach


Rima Amin, Security Policy Manager for Community Protection at Meta, provided insights into how major platforms address DNS abuse and brand impersonation. She emphasised that DNS abuse accelerates harm across multiple threat areas including domain spoofing, cyber-squatting, and deceptive redirects.


Amin advocated for global solutions and consistent approaches rather than fragmented country-specific responses, noting that the borderless nature of the internet requires coordinated international responses.


### Data-Driven Insights: Concentration of Abuse


Graeme Bunton, Executive Director of the NetBeacon Institute, provided crucial data that reframed understanding of the DNS abuse problem’s scope. His research revealed that “95% of the malicious domains that we see belong to about 50 registrars or less, 80% belongs to less than 20.”


This concentration suggests that targeted interventions could be highly effective. Bunton’s data demonstrated that “the problem space is not huge” and “we can sort of wrap our collective arms around the scope of that problem.” He emphasised that proactive processes and automation are essential given the scale of abuse that reactive reporting cannot handle.


### Governance and Coordination Challenges


Bertrand de La Chapelle, Executive Director of the Internet and Jurisdiction Policy Network, provided a systems-level perspective, observing that “this whole thing is a speed and scale challenge and it’s a data challenge. It’s a data sharing challenge.” He noted the emergence of new intermediaries that handle abuse workflow management.


Andrew Campling raised questions about governance gaps, particularly regarding country code top-level domains, noting “the real gap here is the lack of action by some of the ccTLDs” and asking “how do we get governments to also step forward to address this?”


## Areas of Consensus and Disagreement


### Multi-Stakeholder Collaboration


Throughout both discussions, speakers demonstrated consensus on the importance of multi-stakeholder collaboration. This extended to specific implementation approaches, with speakers advocating for coordination through existing frameworks rather than creating entirely new governance structures.


### Integration Over Replacement


Speakers consistently advocated for integration rather than replacement of existing DNS infrastructure when considering blockchain technologies. This reflects understanding of the DNS ecosystem’s complexity and the risks associated with fundamental architectural changes.


### Technical Value Debate


Despite agreement on integration approaches, speakers disagreed about the technical value that blockchain technologies could add to existing DNS security mechanisms. While some advocated for blockchain as a secondary security layer, others questioned whether blockchain provides meaningful improvements over existing DNSSEC capabilities.


## Emerging Challenges and Questions


### Blockchain Community Engagement


Questions arose about how to incentivise blockchain community participation in responsible integration frameworks, highlighting uncertainty about whether blockchain solution providers will engage meaningfully with DNS governance approaches.


### Scaling and Government Engagement


Multiple speakers acknowledged that current abuse mitigation processes struggle with the scale of modern threats. Questions about government engagement, particularly regarding ccTLD accountability, highlighted governance gaps in current approaches.


### Digital Identity and National Infrastructure


Questions from participants highlighted how many government institutions use generic domains instead of their national ccTLD, potentially creating cybersecurity risks and undermining digital identity frameworks.


## Conclusion


This workshop demonstrated both the complexity of challenges facing DNS infrastructure and the potential for multi-stakeholder approaches to address them. The discussion revealed that both blockchain integration and abuse mitigation require coordination mechanisms that preserve existing infrastructure stability whilst enabling innovation and improved protection.


The speakers’ emphasis on integration rather than replacement, proactive rather than reactive approaches, and coordinated rather than fragmented responses provides a foundation for continued progress. However, unresolved questions about blockchain community participation, scaling abuse mitigation, and addressing governance gaps highlight the need for continued engagement.


The workshop’s connection to UN Sustainable Development Goal 9 underscores that these technical discussions have broader implications for global development and digital inclusion. Success will depend on translating the collaborative approaches demonstrated in this workshop into concrete actions that preserve trust in digital infrastructure whilst enabling necessary innovation.


Session transcript

Emily Taylor: Good afternoon, everybody. Thank you very much for joining us this afternoon. You are at workshop 280, the DNS trust horizon, safeguarding digital identity. My name is Emily Taylor. I’m the CEO of Oxford Information Labs and a co-founder of the Global Signal Exchange. And we were asked to put together this panel this afternoon for two dynamic coalitions, the dynamic coalition on DNS issues and the dynamic coalition on data and trust. And thank you to those organizations for asking us to do it. So this workshop will look at the WSIS+20 review and the issues of digital trust and identity through the lenses of blockchain identifiers and emerging namespaces, and multi-stakeholder voluntary measures to fight online harms including scams and fraud. Each of these issues requires the domain name system to evolve in some way to cope with these emerging challenges. And each has been a struggle, because they’re complex in nature and they require the coordination of multiple stakeholders. We will hear from a range of speakers on the issues, and the session will be moderated by my good friend and long-term colleague, Keith Drazek, who is Vice President, Policy and Government Relations at Verisign. So with that, Keith, I hand over to you and thank you very much.


Keith Drazek: Thank you very much, Emily, and welcome everybody to our workshop 280. And as Emily noted, this is a joint workshop proposed by the Dynamic Coalition on DNS Issues and the Dynamic Coalition on Data and Trust. And our view of this session is really in some ways the beginning of a multi-stakeholder conversation on two separate issues that Emily touched on: blockchain identifiers and the need for responsible integration with the DNS, and online harm mitigation up and down and across the stack, with different roles and responsibilities and technical capabilities for the various actors in the stack. Each one of these really does require multi-stakeholder engagement and multi-stakeholder input. And we just want to call that out: this is sort of the beginning of that part of the conversation. So look for more opportunities in the near future to engage on these issues. So I’m going to go ahead and introduce our panelists here. But before I do, I just want to note that as we are here at IGF in a season of looking ahead to the WSIS Plus 20 review, we thought of this workshop in the context of the UN Sustainable Development Goals. In particular, SDG number nine, which is to build resilient infrastructure, promote inclusive and sustainable industrialization, and foster innovation. And really both of these topics, I think, are tied directly to that. And so we wanted to really demonstrate that those at the table, industry and other actors, are really engaged in trying to advance, in this IGF context, some work around that specific SDG number nine. So with that, let me go ahead and introduce our panelists. A little bit of housekeeping: we’re going to have probably five to seven minutes for each panelist to make some introductory remarks. And then we really do want this to be an interactive dialogue with you in the audience and you online. So we’re going to try to keep a good chunk of time at the end here for the dialogue, and then we’ll probably save five minutes at the end for a little bit of wrap-up stock-taking for the rapporteurs. So, first panelist, and not in order of speaking necessarily, but first on my list is Lucien Taylor. Lucien is the CTO and founder of the Global Signal Exchange, a global clearinghouse for real-time sharing of scam and fraud signals. We also have Hilde Thunem, managing director of NORID, the Norwegian ccTLD registry, .no. We’re thrilled to be here in Norway, of course. Online, I believe we have Graeme Bunton, who’s the executive director, NetBeacon Institute, an organization established by PIR, the .org registry, that’s focused on helping the internet community identify and report DNS abuse, establish best practices, fund DNS research, and share data. We also have Benoit Ampeau, director of partnerships and innovation at AFNIC, the French internet registry. We have Swapneel Sheth, a senior director of research engineering at Verisign in the office of our chief technology officer. And we have Rima Amin, security policy manager, community protection with Meta. And Dr. Esther Yarmitsky, UK Department for Science, Innovation and Technology. Esther has a PhD in internet governance. She’s here speaking in her personal capacity. And we are very, very happy to have each of you. So with that, I’m going to start off, because we have two topics, I’m going to lead off with the first, and that’s going to be the topic of blockchain identifiers and the DNS.
And probably stop after the three speakers have had their chance to have an intervention, give their remarks, and give an opportunity for questions or audience engagement, participant engagement, before we move on to the next section. But I’ll be keeping an eye on the time with the help of Emily, make sure we keep to our schedule. And with that, let’s go ahead and kick it off. So, Benoit, I’m gonna turn to you first on the topic of blockchain identifiers and the need for responsible integration in the DNS. From your perspective, what are the main challenges to maintaining trust in DNS systems in the face of emerging technologies like blockchain?


Benoit Ampeau: Thank you. Hello, everyone. Delighted to participate in this session. So yes, I will talk about the importance of trust in the security of digital identities, and the challenges posed by the emergence of new technologies like blockchain alongside the current internet naming system, the DNS. It’s opening discussion and also raising concerns, and it’s a challenging task to present this broad, complex topic in such a short time, so I’ll do my best. The domain name system, as you know, currently constitutes a reference infrastructure for creating and resolving names on the internet. It has been available to all connected internet users for more than 40 years. During this time, initiatives for alternate naming systems have emerged on a regular basis. For instance, I can mention Namecoin or even GNUnet; they are seeking to establish themselves by exploring models other than the DNS, but all are partially inspired by it. Today, we observe a significant number of organizations creating and establishing naming systems based on blockchain all over the world. For many years at AFNIC, we have been studying the theme of trust in DNS applied to different use cases and technical environments, and regularly evaluating the integration of other identifier namespaces, such as those used in the Internet of Things industry or in the blockchain ecosystem, studying both the risks and the potential complementarities of these identifier systems with the DNS. We also conducted a study, publishing a report last year, on the possibility of blockchain actually replacing the DNS. And we are currently evaluating, for a future report on a more technical layer, the security level that an identifier system based on a public blockchain would offer for both registration and name resolution services. In addition to that, we established with our partners from DNSRF a roadmap of work on the current ecosystem and blockchain identifier solution providers: from name collisions, provider mappings, and their economic models to, ultimately, later this year, a general risk assessment framework. So, very quickly, three outcomes of our studies. First, the importance of trust. Trust is essential for the security of digital identities. Without it, users and businesses cannot operate effectively online. Integrating new technologies like blockchain into existing DNS infrastructures presents unique challenges. Blockchain has proven to be very robust against alteration of the data associated with identifiers, but it also raises questions about governance and standardization. Global consistency, and the kind of trust that all stakeholders place in the DNS, have yet to be built for blockchain. Second, the uniqueness of names in DNS and blockchain. These are two different methods of safeguarding the uniqueness of names in their naming space. The DNS, as you know, relies on a hierarchical architecture and a system of delegation, which also provides decentralization, by the way. The uniqueness of names is ensured by a governance system coordinated by ICANN, which supervises the root of the DNS through its technical function, IANA. And then registries such as us manage each TLD, top-level domain, on a delegated basis. The existence of a single root, trusted by all, which ensures that no name can be registered twice in the same naming space, is key. In the case of blockchains, the naming space is generally regulated by smart contracts, which define the rules for registration and resolution of blockchain identifiers.
In theory, these contracts ensure the uniqueness of all names under a given contract, and several smart contracts can exist on the same blockchain. However, the uniqueness of identifiers is not centrally managed, and at the global level all blockchains operate independently. Therefore, it is possible for the same identifier to be allocated by more than one blockchain, leading to duplication. The DNSRF study revealed collisions between blockchain identifiers and domain names. For example, top-level identifiers like .wallet and .crypto could pose security issues if adopted without adequate planning. Remember the last gTLD round in 2012? printer.home and user.corp were in use as internal suffixes by corporations, and ICANN received applications for .corp, .home, .mail, and so forth. So we, with DNSRF, have examples of gTLD and ccTLD collisions. Three of the providers we studied had created top-level identifiers that were the same as existing gTLDs: one with eight direct conflicts, the second with four, and the last with one. At the ccTLD level, we also found collisions with existing ccTLDs, and even two-letter top-level identifiers that have not been delegated at all. So, our work with DNSRF aims to give a concrete view of the situation, to better evaluate the risks and to provide a risk assessment framework for the concerned stakeholders – institutions, policymakers – for their own purposes, which is important here. As a conclusion, in a nutshell: maintaining trust in the DNS is crucial for digital identities – DNS identities, and the other digital identities that rely at some point on the DNS. The integration of blockchain with the DNS could present opportunities, but also significant challenges that could alter this trust. The proliferation of blockchain identifier systems makes them prone to confusion when resolving names. Finally, stakeholders’ involvement is essential to overcome these difficulties and to understand the potential benefits but also, very importantly, the risks. Thank you.
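As an editorial illustration of the collision checking Benoit describes, here is a minimal Python sketch that compares candidate blockchain top-level identifiers against the TLDs actually delegated in the root zone, using the public list IANA publishes. The candidate identifiers below are illustrative, not drawn from any specific provider.

```python
# Minimal sketch: flag blockchain top-level identifiers that collide with
# TLDs delegated in the IANA root zone. Requires network access to fetch
# the published list at https://data.iana.org/TLD/tlds-alpha-by-domain.txt
from urllib.request import urlopen

IANA_TLD_LIST = "https://data.iana.org/TLD/tlds-alpha-by-domain.txt"

def delegated_tlds() -> set[str]:
    with urlopen(IANA_TLD_LIST) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    # The first line is a comment header; the rest are TLD labels.
    return {line.strip().lower() for line in lines if not line.startswith("#")}

def find_collisions(candidates: list[str]) -> list[str]:
    tlds = delegated_tlds()
    return [name for name in candidates if name.lstrip(".").lower() in tlds]

# Illustrative identifiers a blockchain naming provider might offer:
print(find_collisions([".wallet", ".crypto", ".music", ".org", ".luxe"]))
# Any overlap with the delegated list signals a name collision to resolve.
```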


Keith Drazek: Thank you very much, Benoit. And I just want to reinforce one of the things that you mentioned, and that is the fundamental importance of the single authoritative root in dealing with matters in the DNS. And one of the challenges and one of the concerns that you’ve correctly flagged is the potential for duplication of records when there should be a single record. And so I think this is important both in today’s context and also looking ahead to the upcoming launch of a next round of new gTLDs in the ICANN space. There’s currently an application window that’s targeted for April of next year, with likely delegation of some of those applied-for strings perhaps a year after that. So this is a live and active topic when we’re talking about potential implications, both at the technical and the policy level, when it comes to expectations around these unique identifiers. So with that, let me turn to Swapneel.


Swapneel Sheth: Hey, thanks, Keith, and thanks for the opportunity for me to be on this stage, part of this conversation. So, domain names, as we know, have long been user identifiers, right? In applications going back all the way – you can think about telnet, FTP servers, email services – and later, domain names were adapted to be used for the web use case. What we’ve seen in the past few years, though, is that blockchain applications and decentralized applications have emerged as a new use case for user-friendly identifiers. So, as an example, we’ve all heard of blockchain wallets by now, and blockchain wallets tend to identify users via a long alphanumeric string, which is human-unfriendly, much like IP addresses are unfriendly. And so, obviously, there is a need for users to be able to use something that’s human-friendly, so that they can make their interactions with these blockchain applications easier. And I think that’s one of the reasons why we have seen dozens upon dozens of these alternative namespaces in blockchains, exactly for this use case, which is trying to make interactions with these blockchain applications much easier. What we are also seeing is a lot of interest in using DNS domain names for these use cases in blockchains, including from people who have been working on this for a long time. And this line of thought – where you can use a DNS domain name and integrate that with a blockchain application – we call that a DNS integration. So imagine, when you’re trying to send cryptocurrency, you can use a domain name which you’re familiar with, as opposed to using a long alphanumeric blockchain address. Now, the thing is, DNS integrations come with their own set of challenges. For example, how do we think about a domain name that’s transferred or expires after the domain name has been integrated into the blockchain application? How do we avoid the risks, the inconsistencies, the security concerns that come along when the same names are used across multiple systems? These are really important topics, and without coordination, these systems will fall out of sync, and when they fall out of sync, they will give rise to unexpected, inconsistent user behavior. And more importantly, these issues will undermine the trust that we have built in DNS over the last several decades. Don’t get me wrong, though: ultimately, I think blockchain-based DNS integrations have the potential to enhance the value of DNS domain names, but we believe that the way to get there…
The DNS, along with this long-standing community in ICANN and IGF, have proven to be resilient and adaptable, right? DNS has well-defined standards and practices for transparency, for control, and for domain name lifecycle management. And I think together these principles can inform and should inform how we build and develop these new integrations with DNS. So now the internet success has been rooted in interoperability. Trust and collective ownership. And as we evolve the DNS to these new use cases, as we innovate, we must preserve these values. So here’s my invitation. Let’s work together. Let’s collaborate together so that we can use the existing critical DNS infrastructure for these new use cases, but let’s do so in a manner that supports our collective goals, which is to build a safe, secure, and reliable internet ecosystem. Thank you.


Keith Drazek: Thank you very much, Swapneel. So I’ll turn next to Esther, but I just want to remind everybody that when Esther’s concluding her remarks, we’ll turn to the audience, turn to you for any questions and comments and input that you may have on this particular topic. And of course, our panelists are more than welcome to engage together and compare notes in any conversation that they’d like to have. So thank you. Esther, we’ve heard about various national approaches to digital identity and online safety from the UK’s Online Safety Act to emerging blockchain systems. As AI transforms both the challenges and the potential solutions, what do you think needs to happen at the global level to address the trust and security challenges? Thank you.


Participant: Thank you so much for your question, Keith. I really believe that the choices we make in the next two years will determine a lot of whether the internet will remain a stable and trustworthy resource, or whether it will become a great vulnerability for us. I will explain why in my remarks. Before I begin, just to reiterate, this is not UK government policy; it is based on my research on AI, but also on institutional governance. I think that when Tim Berners-Lee created the World Wide Web, he was probably not expecting how much of our global e-commerce system and our economy would depend on the structures and the protocols that we have in place today. The internet is the backbone for 5.4 billion internet users, which is an incredible number that we hope will grow and reach everyone who hasn’t been connected yet. Living in this type of environment also presents a lot of risks and challenges. I know that we will move to the topic of fraud after this, but I just wanted to highlight how important it is that we maintain a secure DNS system: in the United Kingdom alone, fraud accounts for 40% of all crime, and 80% of that is cyber. So while all of this is intensifying and we have a lot of issues of fraud in the current DNS system, the new naming systems that we are discussing today are emerging in parallel with the global domain name system, with their own logic, standards, and risks. ICANN, which we respect and love as the important body that is keeping the internet stable and interoperable, acknowledges that blockchain naming systems are being built outside of the global domain name system. It’s a really good time to talk about Web 3.0, and also about Web 3.0 groups preparing to apply in the next round of gTLDs at ICANN in 2026, so this is a really good time to discuss these challenges. So the critical choice I want to highlight, and my co-panelists have addressed it, is that we need to answer the question: do we integrate these blockchain systems into the global domain name system, or do we watch our infrastructure fragment in dangerous ways in which fraud will likely just intensify? And the question that we’re facing today is definitely not whether these systems will emerge and whether this threat will exist, because it already does, and it will; I think we’ve already moved past that question absolutely. And I am proposing very much a multi-stakeholder approach to really tackling this. Some of the things that I’m thinking about also connect to some of the GDC and SDG lines that we have. So, under GDC principle 2, it would be very, very smart to integrate cryptographic identity into DNS queries; it would add a very smart security layer to what you are accessing. GDC principles 3, 4, and 5 would also enable federated AI systems to detect fraud in real time on top of the structures that we have today. And how does this work?
Well, we know that blockchain identifiers create really strong, secure records, and GDC principles 3, 4, and 5 would be a great opportunity to combine these two approaches. This would actually strengthen the single-root principle that maintains today’s internet stability and universal connectivity, and that’s why it’s important that ICANN pays attention to this now and really leverages the next round. This is very important because if we try to replace the DNS, it will be really bad for the internet. We need to integrate: to bring any new technology and innovation into the system we have today. And to conclude, a multi-stakeholder process is the only way forward. No government, no private entity, no civil society group can solve this alone. This needs to be done within the forums that we have today, especially within ICANN; the GAC needs to be actively involved to make sure that the next round is successful and integrates these really important thoughts that we are discussing today. Thank you.


Keith Drazek: Thank you very much, Esther. I think this is a really important point: innovation needs to be supported and encouraged, but it needs to be done in a responsible way. I think each of our speakers so far has really reinforced that issue, and I really appreciate your focusing this on the need for multi-stakeholder engagement around this topic. I think that’s critical, and certainly one of the reasons that we’re here today at IGF, bringing this to the community’s attention. So with that, let me ask if there’s anybody who would like to come to the microphone – actually, there’s one on each side – or to get in queue, and I’m checking to see if we have anybody online. I would say we have five or ten minutes to discuss this topic before moving on if we’d like to, or we could move directly into the online harms discussion and then come back for questions and comments at the end. Edmund, go right ahead. Hello. Welcome.


Edmund Chung: Edmund Chung here from .Asia. I just wanted to pick up on integrating blockchain and some of the emerging technologies into how we manage the DNS. Two things I want to highlight. One is, I thought the point about including cryptographic technologies in the resolution process is quite interesting; currently the DNSSEC protocol – the DNS security extensions – does do that, so I’m just curious how blockchain would add to the DNSSEC part. The other thing that came to mind is: what about the registration data? I personally find the registration data to be quite useful, and blockchain might actually be very useful for registration data, especially for domain transfers, ownership transfers, and authentication of those issues, because of the nature of blockchain. I’d like to see if there are any thoughts on those two things.


Participant: Thank you, thank you so much for your question. And DNSSEC – very good that you brought it up, thank you; I wanted to get into that as well. It’s true that it already provides a form of validation of DNS responses to prevent tampering. But this blockchain enhancement would build on top of that, I would say: it would exist as a secondary security layer on top of DNSSEC, similar to what will happen when we need to develop quantum-resistant defence processes.


Benoit Ampeau: Yes, and I would just add that DNS is secure today. DNS domain names are portable, they’re flexible, and they’re secure. Security is provided by DNSSEC as a means of authentication, and otherwise email and web traffic are supported via encrypted protocols. So I want to restate that DNS is secure today.
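For readers wanting to see what the DNSSEC validation mentioned in this exchange looks like in practice, here is a minimal sketch, assuming the dnspython package and a DNSSEC-validating resolver; the resolver address and test name are illustrative.

```python
# Minimal sketch: ask a validating resolver whether a name's data passed
# DNSSEC validation, by checking the AD (Authenticated Data) response flag.
import dns.flags
import dns.resolver  # pip install dnspython

def dnssec_validated(name: str) -> bool:
    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]        # assumed: a validating resolver
    resolver.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC records
    answer = resolver.resolve(name, "A")
    return bool(answer.response.flags & dns.flags.AD)

# True only when the signature chain from the root down to the zone validates.
print(dnssec_validated("verisign.com"))
```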


Keith Drazek: Okay, thank you very much for that. And Edmund, thank you for that question. We have another speaker. If you could identify yourself please and then go ahead and ask your question.


Andrew Campling: Yep, hi there. Thank you. Andrew Campling. I’m from a consultancy, 419 consultancy, which amongst other things spends time thinking about the DNS. I’d echo the comments about DNSSEC. And yeah, so I’m skeptical that there will be value add on that specific point, doing two lots of the same thing. I think, particularly when I know there’s post-quantum being tested to extend DNSSEC anyway. And we have to think about the compute cost, the environmental cost of doing that twice. So I think we should be cautious heading in that direction. The point I wanted to make, though, some of the, let’s call them the Web3 naming schemes, have as a feature of their creation that they have no governance. So some of the ones that don’t have it as a feature have fairly immature governance, not by design, just because they’re still very early in their evolution. And I think there’s a lot of very useful lessons that could be carried across from the approach to governance in the DNS into those systems. So there will be benefit from integration, at least at the governance level, if not at the technical level. So I don’t know if any panelists want to comment on the governance points.


Swapneel Sheth: So I don’t know if you’ve been following the DNSOP Working Group in the IETF, but we have a couple of drafts that we’re working on towards responsible DNS integration, and one of them was – as of early this week – adopted by the Working Group. It talks about the considerations for integrating DNS domain names into blockchain namespaces or blockchain applications in a responsible manner. So it’s sort of a checklist of things to go through as you’re building your integration, and hopefully, by the end of it, you will have a responsible integration. And that will obviously carry the governance of the DNS, because it’s rooted in the DNS and the ICANN-managed DNS root – if that helps answer your question. And just a plug here, since you brought up post-quantum DNSSEC: if you are interested in that topic, we are actively working on it. We will have hackathons, and we’ll also have a PQ DNSSEC side meeting at the upcoming IETF meeting. Please join us and feel free to contribute.


Benoit Ampeau: I fully agree with what you said, but from the blockchain solution providers’ perspective, we do not know yet whether they would want to engage in this way in responsible integration. So basically, we know the internet is complex; we all know this. Maintaining and providing a consistent user experience for a stable, resilient, secure internet is complex. So we’ll see what the future will be as these kinds of actors are integrated into the DNS ecosystem at large.


Swapneel Sheth: I’ll make another point while we’re on the topic, on the interest from the Web3 community. On the draft I just talked about, I have co-authors from ENS – the Ethereum Name Service, an alternative namespace on the Ethereum blockchain – and another co-author from Bluesky, a decentralized social media platform that is trying to use DNS domain names as social media handles. So I just want to say that there is enough interest from the Web3 community to integrate responsibly, as long as we are willing to work with them.


Participant: And maybe just to add, and to emphasize again, that this is important to think about now, because ICANN now has an opportunity to seriously engage. As I mentioned earlier, it is a growing industry and there is interest, but it’s our responsibility, coming from the ICANN and traditional DNS side, to engage, and that’s also within our hands.


Keith Drazek: Thank you very much, and Andrew, thank you for the question and the engagement. We have two quick questions, I think, in the chat that Emily will read, and then we’ll probably pivot and move on to the online harms discussion. We can always come back to this at the end. So, Emily.


Emily Taylor: Yes. So we had a comment from Luke Siffer saying – okay, I’ll sum it up – “It’s not that complex. The entire Web 3, 4, 5 blockchain saga is merely a creatively lazy attempt to monetize the internet by fragmenting DNS with an alternative root. Nothing revolutionary here, just reheated hype. Time to move along.” Bevan Wathen gave a thumbs up and agreed with Esther’s call for integration, not replacement, of the DNS. And Carolina from Oxhill – hello, Carolina – asked two questions: how do we ensure responsible integration happens in a multi-stakeholder manner, and also how do we get the blockchain community to participate – what incentives exist for them to participate? I think a lot of the second question you touched on in the recent remarks. Thank you.


Keith Drazek: Yeah, thanks very much, Emily. And to Carolina’s question about the opportunities for engagement: I think the dynamic coalitions that are represented here are future opportunities for continued engagement in a multi-stakeholder way on this conversation, but probably not the only options. So we should be creative and think about how to reach out and engage folks from this particular community, but also other multi-stakeholder actors and perspectives, to make sure that we have a well-informed and broad understanding of the various concerns and opportunities. So thank you for that. We will now pivot and move on to our discussion of online harm mitigation. I’ll just take a couple of minutes to give some context and frame the discussion. Over the last five to ten years, the term DNS abuse has come to cover five types of technical abuse: phishing, pharming, malware, botnet command and control, and spam when spam is used as a delivery mechanism for those other four. Obviously, that is just a subset of the broader topic of online harms, right? So that’s DNS technical abuse. There are obviously other online harms that are related to content, and there are a number of different actors in the system, and a number of different technical capabilities to mitigate abuse at the most appropriate time, at the most appropriate level, without disproportionate impact on other actors in the system. So I think what we’re going to talk about today is the broad topic of online harms, and with that, I’m going to turn first to Hilde. Hilde, a question for you. The Norwegian top-level domain .no is an example of a namespace with very low rates of abuse like phishing, malware, and spam. Could you share some insights from your perspective into what could be the reasons behind that? Thank you.


Hilde Thunem: ≫ Thank you for having me here on the panel. I’d like to start by saying that just like in the offline world, I think online abuse rates are a very important part of the world, and I think that the Norwegian model and the Norwegian approach to the .no domain name provides one example of how many different stakeholders can work together and have a positive effect. So all domain registries, people like me that hand out domain names, we operate within a ecosystem of the local law where we’re based and the registration policy. And one of the factors that influence the type of neighborhood that sort of grows under a top-level domain is the requirements that the registry imposes on those wanting to register domain names. So the registration policy for NO is shaped by NUDID, but we do this in consultation with different stakeholders in the Norwegian society and within the domain regulation that provides a sort of framework for the basic principle of this. And one of the requirements we have is that anyone who wants to register a .no domain name must identify themselves by providing either the organization number registered in the Norwegian Register for Business Enterprises, and foreign companies can do this if they have a Norwegian subsidiary, or as an individual to have a national identity number registered in the National Population Register. So if you worked in Norway for a long time you get one of these. And before granting the right of use to a domain to anyone, we verify that they exist in one of these official registers. So we look it up and this ensures that each NO domain name is registered to a real individual or organization who is responsible for how the domain is used. So sad to say, and I hope I’m not breaching any childhood dreams here, but Santa Claus does not have a .no domain name because he does not exist. But of course this is not only a sort of registry only effort, because we don’t talk to the domain holders directly. So it’s the registrars who have the direct contact with their customers that are required to know who they are, and to sort of ensure that the one contacting them actually represents the organization that they are. are trying to register a domain for. But how they do this is left to the registrars, because that varies widely if you’re a small registrar that knows every customer personally, or you’re a large registrar with different control systems. And then we also have a very, I think, fairly unique Norwegian rule that there is a limit to how many domain names each domain holder can have. So if you’re an individual, you get up to five domain names, and if you’re a company, you get up to a hundred. And the rationale behind this is that domain names are a limited resource, or good domain names are that. And sort of in the Norwegian way of, there must be some cake left on the table for the latecomers. We want to keep some domains still there so that early adopters don’t get to take them all. So both of these requirements are there for other reasons than fighting online harm. But they have the happy side effect that they irritate the scammers a lot. Because first of all, someone wanting to register a domain to use it for illegal content, for scams, for spam, they have to either identify themselves or steal somebody else’s credentials. And when they do, and they sneak past the registrar’s control mechanism, they get only a hundred domain names, or five if they stole somebody’s personal credentials. 
And that’s kind of friction for those that need to burn through a lot of domain names in order to spread their scams. At the same time, the whole point of making it slightly difficult for the criminals is also not to create a big burden on the legitimate domain holders. Because we want people to have domain names. We want them to have their little corner of the internet where they have ownership of the content they produce, instead of just being on the large online platforms. And so this makes .no a fairly safe space, but of course there are Norwegian criminals, and there are other criminals that steal credentials. So in the cases where domain names are used to commit a crime, the rest of the regulatory ecosystem comes into play. The Supreme Court in Norway established as early as 2009 the principle that it is the domain holder that holds the responsibility for the use of the domain name. And since that is actually a real person or an organization, there is a place to start if one wants to take action. And this year, the revised Electronic Communication Act provided further clarity by putting this principle into law. So, as a last resort and when proportionate, action may be taken against the domain name. But such measures require a process that safeguards the legal rights of all the involved parties. And this is especially important because for top-level domains like .no with presence requirements, almost all of the domains that are used for the technical online harms like phishing are compromised domains. The domain holder is a victim who has had his website or his domain compromised, and is not necessarily the perpetrator. But in those rare cases where a domain name needs to be taken down instead of the content acted upon, Norwegian police have a clear mandate in law to seize domain names, similarly to what they can do in the offline world, where they can seize a car or a gun, or a dog if it has bitten someone, and keep it as part of a case that’s been raised. And just like in the offline world, when they seize a domain name in the online world, they have to follow the requirements for due process. The Consumer Protection Agency has the same sort of power to require a domain name to be deleted or transferred in cases of serious online harm to consumers as a whole, but it has to go to court, and prove that it has tried less impactful actions first. So in summary, I think it’s the combined effort of the registry and the registrars as part of the registration process, and then the regulatory framework and the public authorities, both providing official databases we can use and acting when illegal content or other online harm becomes a problem.
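
To make the mechanics of those two checks concrete, the following is a minimal sketch, assuming invented stand-ins for the official registers and the registry database (not Norid's actual systems), of how a verify-then-quota registration gate like the one described above might look:

```python
# A minimal, hypothetical sketch of a .no-style registration check:
# the applicant must exist in an official register, and per-holder
# quotas apply. All data and names below are invented stand-ins.

# Toy stand-ins for the official registers and the registry database.
OFFICIAL_REGISTER = {
    ("987654321", "organization"),   # Register for Business Enterprises
    ("01010012345", "individual"),   # National Population Register
}
DOMAINS_HELD = {"987654321": 12, "01010012345": 5}

MAX_DOMAINS = {"individual": 5, "organization": 100}

def may_register(holder_id: str, holder_type: str) -> bool:
    # Requirement 1: the holder must be a real, registered entity.
    if (holder_id, holder_type) not in OFFICIAL_REGISTER:
        return False
    # Requirement 2: quota of 5 domains for individuals, 100 for companies.
    return DOMAINS_HELD.get(holder_id, 0) < MAX_DOMAINS[holder_type]

print(may_register("987654321", "organization"))  # True: 12 of 100 used
print(may_register("01010012345", "individual"))  # False: quota of 5 reached
print(may_register("santa", "individual"))        # False: not in any register
```

The point of the design is that both checks are cheap for a legitimate registrant who already appears in an official register, but expensive for a scammer, who must supply or steal a verifiable identity for every hundred (or five) domains they want to burn through.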


Keith Drazek: Thank you very much, Hilde. I want to touch on a point that you raised, and that’s the important distinction between domain names that have been registered with malicious intent, for the explicit purpose of perpetrating fraud or crimes or online harms, and compromised websites or compromised web hosts. And I think you also noted that there could be an instance where a domain name that was registered with perfectly legitimate intent had an account compromise. So there’s actually a range of possibilities in terms of the use of the domain name. But you also reinforced an important point: depending on the nature of the harm or the nature of the abuse, there could be action that’s appropriate at the registry level or the registrar level, or in combination somehow, or there’s the need to engage the content layer of the infrastructure stack, to make sure that the web hosts and the CDNs are also involved in instances where a website has been compromised, because they’re the only actors that can take the surgical action needed to address that particular bit of harm. And so proportionality is important in all of that. Thank you for all of that, Hilde. Okay, I’m going to turn to Lucien next. Lucien, on day zero there was a great session, I recommend it to everybody, on online fraud and scams. We heard in that session from you about the Global Signal Exchange, your new initiative to address a number of challenges in tackling scams and fraud. So I’m curious what you’d like to share on that, and how it’s different from other initiatives. Lucien, thank you.


Lucien Taylor: Thank you very much, Keith. I just wanted to say, I think we’re going to do our little speeches, and then there’ll be questions after that, if you want to take a rest. Okay, thank you very much. Yeah, Keith, thank you for that. Three short answers, and then I’ll extemporize. The first is that we didn’t just dream up the idea of building a Global Signal Exchange ourselves. A number of organizations came together in a multi-stakeholder community and asked for that organization to be created; in other words, it was missing in the current fight against scams and fraud. So how are we different? Well, we seek to change the game in the effort to tackle scams and fraud. And finally, in answer to the… well, it’s a point that I generally want to make: we’re here at the IGF, and we really think this multi-stakeholder environment is the ideal sort of space to discuss these things. But I’ve also been hearing about the Internet Infrastructure Forum and others, which are safe places, less polemic, where we can actually come together and figure out how to solve this without getting into a circular firing squad. So to dig in, my first point was the creation of a new data-signal-sharing entity. The need for a cross-sectorial, international signal-sharing platform was identified at the Global Anti-Scam Summit in Lisbon in 2023. And the GSE brings together a number of partners to deliver a new service to fight scams and fraud. Currently, we’ve got 160 organizations in the accreditation pipeline, so there is a strict accreditation process. We’ve got the commitment from four big tech companies, including Meta (thank you, Rima). And we’ve got a huge new opening: Google is opening up its own proprietary threat intelligence to this new venture, trying to depend less on lots of bilateral threat-signal arrangements and instead have a single service to go through as a kind of broker. We’re also in negotiations with several governments and law enforcement bodies. So how are we changing the game? Cybercrime is rising relentlessly; I don’t think any of us can seriously argue with that. I’ve asked my family to look at their phones and give me some WhatsApp examples, and they’ve got dozens: you’ve got to pay a fine, you’ve got a new bill from some car park or some tax office. We’re under this relentless pressure all the time to re-evaluate the things we’re being presented with. There are a number of initiatives across the internet supply chain, the verticals as we call them, that are doing good things. But the cybercrime vector is still increasing. When I talk about the internet supply chain, I mean the supply chain that’s available to scammers and fraudsters as they build their infrastructure. A fraudster will register a domain name, build an identity, register a company, build a website, benefit from a content delivery network, and so on. They will then establish false IDs on social media channels, on email, and elsewhere. They will then easily engage with potential victims through chat, through email, through messaging services. And finally, step four, I call it: a banking commitment is made, a crime is registered. And at that point, we know we’re dealing with a criminal.
And then they package those fraud services, recycle them, make them available, and actually provide a fraud and scamming industry for others to enjoy. So the criminals are moving faster than us. They’re exploiting cross-border legislative tensions and sharing bad things between each other better than we share things. So the GSE aims to deliver new things. First of all, face up to the governance and policy challenges, and they are considerable; we’ve been talking about them in ICANN and the IGF for decades now. And secondly, address the technical challenges. In terms of the governance and policy challenges, we are tackling head-on the cross-border, international, and cross-sectorial challenges, and we’ve hired good lawyers, and I’m not going to even bother to talk about all of that today. Thank you very much, Emily. In terms of the technical challenges, I’ll get back into my comfort zone. We’ve invented our own acronym, QIQ, the quick factors; you can’t invent a new organization without some new acronyms. Those quick factors are quantity, immediacy, and quality. In terms of quantity: have we got enough data to reflect the actual problem, the problem that consumers are suffering? In January, we had 40 million threat signals. Let me put that into context: Action Fraud at the City of London Police gets 30,000 reports every month. We’ve risen from those 40 million, through the Google stack, up to 270 million threat signals, and they’re rising by a million threat signals a day. And we still believe we’re not seeing half of it. Hopefully, when Meta and more come on board and start supplying signals, we’re going to start to see what the consumer is really suffering from. We want those signals to be provided to the participating organizations, and we call this uplift. Uplift is when all parties share signals and thereby find new information for themselves, over and above their own stocks of threat intelligence. We observe uplift. We also want to reduce the cost of signals for the smaller players. Immediacy: we need to make things quicker and reduce the time to live for scams and fraud online. The time between a signal being reported and a signal being mitigated needs to be brought down from an average of four days between detection and mitigation. Esther mentioned the need for federated models, big computing power, and AI to move towards real-time threat detection, to identify these clusters as they’re happening. Finally, quality: we need to tackle both the quality of the signal and of the provider, and these are impacted by two things, confidence scores and feedback. A signal provider can attach their own confidence score to a signal, and this can be improved by what we call overlap: when parties share signals and simultaneously detect the same signal, we increase confidence. And the second big part of our work is to develop a feedback loop. This is a concept that came from cybernetics; it’s something I started employing and talking about in 2023, because it’s missing in the game. And the feedback loop is an enormously challenging bit of work: you can’t just provide feedback on threat intelligence signals which are low-quality, neighborhood-watch-type things. Those are not evidence-based pieces of data that will stand up in court. So the quality of the signals is absolutely essential. I’m running out of time, so I’ll just summarize.
We have a number of pilots with registries, registrars, advertising communities, big tech doing handshakes, and police and law enforcement. Thank you very much.
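
To illustrate the overlap and confidence-score ideas in practice, here is a minimal sketch under assumed data shapes; the aggregation rule, field names, and example indicators are hypothetical rather than the GSE's actual algorithm:

```python
# Hypothetical sketch of overlap-based confidence: several independent
# reports on the same indicator raise the combined confidence. The rule
# below (treat reports as independent evidence) is illustrative only.
from collections import defaultdict

signals = [  # (indicator, provider, provider's own confidence 0..1)
    ("parking-fine-scam.example", "provider_a", 0.6),
    ("parking-fine-scam.example", "provider_b", 0.7),
    ("single-report.example",     "provider_a", 0.6),
]

by_indicator = defaultdict(list)
for indicator, _provider, confidence in signals:
    by_indicator[indicator].append(confidence)

def combined_confidence(scores: list) -> float:
    # The chance that every reporter is wrong is the product of
    # (1 - score); combined confidence is the complement of that.
    p_all_wrong = 1.0
    for s in scores:
        p_all_wrong *= 1.0 - s
    return 1.0 - p_all_wrong

for indicator, scores in sorted(by_indicator.items()):
    print(indicator, round(combined_confidence(scores), 2))
# parking-fine-scam.example 0.88  <- overlap increases confidence
# single-report.example 0.6      <- one report keeps its own score
```

Treating reports as independent evidence is one simple option; a real system would also weight each provider's historical accuracy, which is where the feedback loop described above would come in.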


Keith Drazek: Thank you very much, Lucien. And, you know, I think what you’ve described is a clear need for an intermediary: an aggregator, a clearinghouse, a platform for data sharing between threat-intelligence providers and reporters of abuse on one side, and the operators of infrastructure that have the capability to address that abuse on the other, right? So thank you very much. Just a time check: we’ve got 18 minutes left. We have two panelists yet to speak, and I’m going to try to keep a few minutes, five minutes, at the end for any questions and engagement from you. So, Rima, if I could turn it over to you. Thank you.


Rima Amin: Sure. Thank you. So I’ll start by saying that our team in security policy works to counter adversarial threats in a number of different areas; that tends to cover influence operations, cyber espionage, hacking, and frauds and scams. And across all of those areas, the evidence shows that DNS abuse accelerates the harm to people and businesses across the board. Our teams are really focused on working to prevent, mitigate, and stay ahead of these threat actors that are looking to abuse Meta’s platforms and violate our policies by redirecting users over to malicious off-platform links. But, as everyone has said here today, this is an internet ecosystem problem, so we really need that multi-stakeholder approach to be able to responsibly manage and mitigate some of these DNS abuses. Just to touch on a couple of the key areas that we’re concerned about and that we see. The first is domain spoofing, where domain names are created to closely resemble legitimate ones in order to deceive the people using our platforms; we also see them being used to phish people online and steal their credentials. The second area relates to cyber-squatting and domain impersonation, impersonation of things like businesses and well-known brands, again created to lure people into thinking they’re in a safe space that they know, and to commit harms against those people. The third relates to deceptive redirects, where adversarial actors may attempt to route users to malicious websites by making them think they’re visiting a legitimate one, and then they get thrown over to a harmful website, potentially with malware and other harmful things. One emerging area we are seeing is the use of link aggregators and shorteners; threat actors are leveraging those to evade the kind of URL impersonation that might otherwise be easier to detect. Just to dive a little bit more into the frauds and scams space and how the DNS plays into it, I’m going to start with the accounts side. If you are a fraudster, you’re most likely to use a fake account or a compromised account. Compromised accounts are particularly lucrative because they have legitimacy and history behind them, and those accounts may also be used to manage different business profiles, et cetera. One way a fraudster might try to gain access to such an account is, again, through malicious links, which install malware, steal credentials, and a bunch of other different things. Once identities are created, to Lucien’s point earlier, the actor will try to engage with their victim. The victim might, for example, get a message appearing to come from a bank, or be taken over to a website impersonating a particular shop; they try to buy a product, they never receive it, they try to go back to the website, they get no recourse, and then they have to go to their bank. When we detect that kind of scheme, we work to take it off the platform and make sure it doesn’t reemerge.
But to Lucien’s point about how long these operations stay on the internet, they continue to exist on other platforms and cause harm. A couple of things that we’ve been doing to protect against the misuse of Meta’s brands: starting in summer 2024, we took action against URLs that came from Vietnam, and we’ve been able to take down 9,000 URLs that were impersonating WhatsApp, Facebook, Meta, Instagram, Threads, and Reality Labs. So we are able to take some action, but we do think more is needed. To go back to the point about these websites continuing to exist on the internet, we make efforts to share the intelligence and signals that we have. We do that through existing signal-sharing programs that we have with industry, and we also think the GSE has a lot of potential, especially because it’s not just industry-focused; there is cross-sector sharing happening there. In terms of moving forward, a couple of things that we think would be really helpful. The first is having global solutions. We’ve seen some really good practice here today, and bringing it into a global context would be helpful because of the nature of the internet; we see a lot of countries trying to tackle this in their own way, and a consistent approach would be incredibly helpful. We also advocate for transparency and accountability policies to navigate DNS abuse, including areas that help with authentic engagement online, and, on the remediation side, making sure that abuse is mitigated as promptly as possible. And we support whole-of-community cooperation here, because we understand that this is a complex problem, we all only see different parts of it, and we need to pull those pieces together.
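
As a concrete illustration of the domain-spoofing problem described above, the sketch below flags domain labels that sit within a small edit distance of protected brand names; the brand list and threshold are illustrative only, and production systems additionally handle homoglyphs, embedded keywords, and IDN tricks:

```python
# Hypothetical sketch of lookalike-domain detection via edit distance.
# Real spoof detection is far richer; this shows only the core comparison.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

PROTECTED_BRANDS = ["whatsapp", "instagram", "facebook"]  # illustrative list

def looks_like_spoof(domain: str, max_distance: int = 1) -> bool:
    label = domain.split(".")[0].lower()  # compare the leftmost label only
    return any(levenshtein(label, brand) <= max_distance
               for brand in PROTECTED_BRANDS)

print(looks_like_spoof("whatsqpp.example"))     # True: one letter off "whatsapp"
print(looks_like_spoof("flower-shop.example"))  # False: nowhere near a brand
```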


Keith Drazek: Okay. Thank you very much, Rima. I think that last point is really critical, and that’s collaboration, cooperation, and information sharing up and down and across the stack, and, to both of your points, the need for cross-sector engagement. For example, the financial transaction processing industry has information that would be very helpful to other parts of the internet stack; that’s just one example. So thank you very much for that. Appreciate it. Graeme, I’m going to turn to you next, and then we’ll try to keep five minutes at the end for questions and answers and community engagement. So Graeme, over to you.


Graeme Bunton: Thank you, Keith. I will try and be brief. First of all, apologies that I couldn’t be there in person. I’ve got a pretty small kid at home and have been traveling a bunch, and it turns out that generates some difficulty sometimes. I really appreciate being able to participate in this panel. I’d like to share a bit today on some of the work that we’ve been doing to try and disrupt online harms, what we’ve learned in that process, and how we think that can contribute to further work within this community. So first, a little bit about the NetBeacon Institute. It was created by Public Interest Registry in 2021. PIR is the operator of the .org TLD and, as a not-for-profit, needs to do good works in service of that mission. It really felt there was a gap within the ecosystem around issues of DNS abuse: that there wasn’t someone in the middle focused on this issue, working across the community within ICANN and outwards to educate, collaborate, and build tools and resources to disrupt DNS abuse. And so the institute was created to fulfill that need. We’re not commercial, as part of the not-for-profit; I’ll talk a little bit about the services that we offer, but we don’t do anything for fee or cost recovery. All of what we do is free. As we began this work, with the mission of trying to make the internet safer for everybody, we first needed to understand the landscape of DNS abuse. So we created a project called NetBeacon MAP, the Measurement and Analytics Platform, which is a free, transparent, and academically robust attempt to measure the prevalence of DNS abuse across the ecosystem, as well as things like concentration, mitigation rates, and median time to mitigation. We do all of that work in partnership with KOR Labs out of the University of Grenoble, with an academic there named Professor Maciej Korczynski. We’ve been providing this data publicly to the ecosystem for three years (I’ve lost some sense of time being stuck in this room), really trying to enable the multi-stakeholder community to do data-driven policy discussion and development, as well as drive industry action based on rigorous data. So what have we learned from that? Well, a couple of things. One is that 95% of the malicious domains that we see belong to about 50 registrars or fewer, and 80% belong to fewer than 20. So on the malicious domain front, in a way, that’s good news: the problem space is not huge. There are differences between the registrars and TLDs in that data, but we can wrap our collective arms around the scope of the problem. There are changes that we can make; there are ways that we can bring all of these parties together and improve the situation. We can see the changes within the industry based on the ICANN contract amendments that came into effect last year, where we begin to see the larger, more active players getting incrementally better on DNS abuse, but they are close to diminishing returns, I think, for the large, more engaged registrars in the space, and we can see abuse concentrating now in a smaller number of more highly abused registrars and TLDs.
Right now, we see a really acute issue with two registrars with very large abusive campaigns happening, and we’ll publish more on that shortly. But I think that data has begun to influence how we approach this problem and think about it, and that really led us to how we can begin to disrupt these things. So we built NetBeacon Reporter, which is a conduit for abuse reporting that anyone can use, via web form or API, to submit abuse reports to any gTLD registrar or registry, or to participating ccTLDs, and we also distribute to hosting companies and CDNs. What we’re trying to do is take abuse reports in, standardize them, enrich them, make them better, reduce the technical burden on the reporter, and distribute those abuse reports to multiple layers of the internet stack to try and disrupt those harms. That work was directly responsive to some multi-stakeholder outputs, SSR2 and SSAC 115, if you speak ICANN. We’ve been running that for a few years now, doing somewhere in the realm of 20,000 abuse reports a month, and we learn an awful lot from that sort of volume. We’re getting a lot of feedback from the hosts and registrars that we’re reporting to on the quality of those reports, and we can see who’s taking action, when, and why. And going back to some of the points made by the other panelists (thank you, Lucien, for that): it seems very clear that there’s still some room for improving reactive processes around abuse. We can do better at evidence gathering, we can do better at getting abuse reports to registries, registrars, and hosts in a timely fashion, and we can get better at helping them respond quicker. All of those, I think, are interconnected. Lastly, it seems really clear that we need improved, reliable, and accessible proactive processes. Abuse is happening at such a scale that trying to react all the time isn’t sufficient. There are days where we have sent 6,000 or 7,000 abuse reports out to individual registries or registrars, and that just doesn’t work; it doesn’t scale without some form of automation, but really it’s about getting in front. So how do we think collectively about getting in front of some of these issues? And one last point I want to make about trust and users: I think we can rely a little bit on trust based on behavior within these systems rather than identity, because behavioral attributes on a platform (how many domains have you registered, how old is your hosting account) can’t be faked, and they feel like a really good place to begin building trust as we think about who has access to these tools and resources. I’ll stop there. I know we only have a few minutes. Thank you very much for the time.
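
On that closing point about behavior-based trust, here is a minimal sketch with invented attributes and weights, showing how observable history (account age, registration volume, outcomes of past reports) could feed a trust score; none of this reflects NetBeacon's actual scoring:

```python
# Hypothetical behavior-based trust score: attributes drawn from
# observable history are hard to fake, unlike self-asserted identity.
# All weights, caps, and field names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccountHistory:
    account_age_days: int
    domains_registered: int
    reports_upheld: int    # past abuse reports confirmed as accurate
    reports_rejected: int  # past abuse reports found to be noise

def trust_score(h: AccountHistory) -> float:
    score = 0.0
    score += min(h.account_age_days / 365.0, 3.0)      # tenure, capped
    score -= 0.05 * max(h.domains_registered - 20, 0)  # bulk registration
    score += 0.5 * h.reports_upheld                    # proven track record
    score -= 1.0 * h.reports_rejected                  # noisy reporter
    return score

veteran = AccountHistory(1500, 4, 10, 1)
newcomer = AccountHistory(2, 400, 0, 0)

print(round(trust_score(veteran), 2))   # 7.0: long tenure, upheld reports
print(round(trust_score(newcomer), 2))  # -18.99: brand-new bulk registrant
```

Because each attribute accumulates over time on the platform itself, a bad actor cannot simply assert a high score the way a stolen or invented identity can be asserted.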


Keith Drazek: Thank you very much, Graeme. Really appreciate it. We have about three minutes left, and I’ve got two people at the microphone. Oh, I’m sorry, we have three. So let’s try to fit in all three interventions, if we can be brief. Go right ahead. Thank you so much.


Audience: Hello. Thank you for these wonderful interventions. My name is Yuv, I’m from Senegal. I really appreciate the topic, especially digital identity and the DNS. Senegal has had its DNS since the 80s, but in 2025 some state institutions still send emails not from .sn addresses but from Yahoo or Gmail addresses, which constitutes a real risk, since the question of cybersecurity arises. And recently, the State of Senegal was the victim of two cyber attacks. So, as experts, what would you propose to the State, so that .sn is used in administrative services, so that there is a digital identity, but also to strengthen security?


Keith Drazek: Thank you very much for the translation. So, good question. Maybe we can take that offline, since we have a couple of others in queue. So, is that Andrew again?


Andrew Campling: It is. I’ll be real quick; two very quick points. Firstly, ICANN have done some great work to tighten up the contracts to address some of the DNS abuse issues. The real gap here is the lack of action by some of the ccTLDs. So how do we get governments to also step forward to address this? Maybe this is the right forum for that, as some of them are here. Secondly, despite the good work in ICANN, the definition of DNS abuse that ICANN uses is incredibly narrow. For example, it doesn’t address things like CSAM, although it does cover phishing. So how can we get more work done to broaden the definition, so it has even more impact than it already has?


Keith Drazek: Thank you very much, Andrew. I can respond to that very briefly. As far as the definition of DNS abuse goes, one of the bright lines is that when you get into content-related matters, ICANN’s bylaws prohibit it from getting involved in content. So the definition of DNS abuse is relatively narrow by necessity of ICANN’s bylaws. But there are other venues where content-related harms are being discussed, and frameworks developed. And Bertrand, I’m going to turn to you as the shepherd of the Internet Infrastructure Forum; that’s one of the areas where some of these content-related discussions are going to take place. Thanks.


Bertrand Lachapelle: Yeah, thank you. Thank you, Keith. So, in a nutshell, I’m Bertrand Lachapelle, the Executive Director of the Internet and Jurisdiction Policy Network. As Keith mentioned, we have been asked to organize a space to address a certain number of the abuses that cannot be addressed within the ICANN environment, and also to engage actors other than just the DNS operators. This is the Internet Infrastructure Forum, which is a new thing that started in February this year. I want to very quickly make a few points in light of what you’ve been saying. First, this whole thing is a speed and scale challenge, and it’s a data challenge, a data-sharing challenge. The second thing is that for scams, fraud, and so on, there is a concept that has been evolving in our discussions in the IIF, which is the notion of theft by deception. This is a category of problems that requires, or would really benefit from, coordinated action by the different actors along the stack. The next thing, and this is what I love about what has been presented here and what we do with the IIF, is that these are bottom-up, spontaneous self-organizations, just like the IETF emerged, just like the other organizations emerged. This is multi-stakeholder, bottom-up initiative in action, and it is actually what is needed, because governments are hobbled by the jurisdictional challenges that prevent them from addressing cross-border issues. And the last thing that is really interesting is that we see the emergence of layers here: the IIF is a space for the discussion of what could be done by the different actors; we see the emergence of new intermediaries, like Graeme described, that handle the abuse workflow management; and what you’re doing, Lucien, with the signal exchange is contributing platforms for exchanging signals. I think this is building the ecosystem that will at last allow us, later on, to engage law enforcement and other actors, so that the whole set of actors can, in a networked fashion, address those abuses.


Keith Drazek: Thank you very much, Bertrand. And with that, we are two-plus minutes over time, so I think we probably need to move to wrap up. I just want to say thank you all very much. Thanks to the panelists. Thanks to everybody online. Thanks to you in the room. And we look forward to carrying this on. I wish we had another hour, but we need to close the session. So thank you very much.



Benoit Ampeau

Speech speed

141 words per minute

Speech length

1000 words

Speech time

423 seconds

Trust is essential for digital identities and blockchain integration presents unique governance and standardization challenges

Explanation

Trust is fundamental for the security of digital identities; without it, users and businesses cannot operate effectively online. Integrating blockchain into existing DNS infrastructures raises questions of governance and standardization, as the global consistency and stakeholder trust that have been built up in the DNS do not yet exist for blockchain.


Evidence

AFNIC has been studying trust in DNS for many years and published a report on blockchain potentially replacing DNS, with ongoing evaluation of security levels in blockchain identifier systems


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Agreed with

– Swapneel Sheth
– Participant

Agreed on

Integration rather than replacement of DNS is the preferred approach for blockchain technologies


Disagreed with

– Participant
– Edmund Chung

Disagreed on

Value of blockchain enhancement to existing DNS security


Name collisions exist between blockchain identifiers and existing DNS domains, creating security risks

Explanation

The DNSRF study revealed collisions between blockchain identifiers and domain names, where the same identifier can be allocated by more than one blockchain, leading to duplication. This creates security issues similar to the problems encountered in the 2012 gTLD round with internal corporate suffixes.


Evidence

Examples include TLDs like .wallet and .crypto posing security issues, and findings of three providers with direct conflicts – one with eight conflicts, another with four, and one with one conflict with existing gTLDs and ccTLDs


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Infrastructure | Cybersecurity | Legal and regulatory



Swapneel Sheth

Speech speed

132 words per minute

Speech length

995 words

Speech time

451 seconds

DNS integrations with blockchain applications have potential but require responsible implementation to avoid security inconsistencies

Explanation

While there’s interest in using DNS domain names for blockchain applications like cryptocurrency transactions, these integrations come with challenges around domain transfers, expiration, and security risks. Without coordination, these systems will fall out of sync and undermine trust built in DNS over decades.


Evidence

Examples include blockchain wallets using long alphanumeric strings that are human-unfriendly, and the emergence of dozens of alternative namespaces trying to make blockchain interactions easier


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Infrastructure | Cybersecurity | Economic


Agreed with

– Benoit Ampeau
– Participant

Agreed on

Integration rather than replacement of DNS is the preferred approach for blockchain technologies


Multi-stakeholder collaboration is needed to develop standards and best practices for responsible DNS integrations

Explanation

The DNS community has proven resilient and adaptable with well-defined standards for transparency, control, and domain lifecycle management. These principles should inform how new blockchain integrations are built, requiring collective collaboration to preserve internet values of interoperability, trust, and collective ownership.


Evidence

VeriSign has published research papers and measurement studies on SSR issues in DNS integrations and is working with the community on standards development


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Infrastructure | Legal and regulatory | Cybersecurity


Agreed with

– Emily Taylor
– Participant
– Rima Amin

Agreed on

Multi-stakeholder collaboration is essential for addressing DNS challenges


Web3 community shows interest in responsible integration through collaborative draft development

Explanation

There is sufficient interest from the Web3 community to integrate responsibly with DNS, as evidenced by collaborative work on standards. A draft on responsible DNS integration considerations was recently adopted by the DNSOP Working Group at IETF.


Evidence

Co-authors include representatives from ENS (Ethereum Name Service) and Bluesky (a decentralized social media namespace), and there are ongoing hackathons and PQ DNSSEC side meetings at IETF


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Infrastructure | Digital standards | Legal and regulatory



Participant

Speech speed

180 words per minute

Speech length

1042 words

Speech time

345 seconds

Integration rather than replacement of DNS is crucial, with blockchain enhancing security as a secondary layer

Explanation

The critical choice is whether to integrate blockchain systems into the global domain name system or watch infrastructure fragment dangerously. Blockchain enhancement would build on top of existing DNSSEC as a secondary security layer, similar to quantum-resistant defense processes.


Evidence

Reference to GDC principles 2, 3, 4, and 5 for integrating cryptographic identity into DNS queries and enabling federated AI systems to detect fraud in real-time


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Agreed with

– Benoit Ampeau
– Swapneel Sheth

Agreed on

Integration rather than replacement of DNS is the preferred approach for blockchain technologies


Disagreed with

– Benoit Ampeau
– Edmund Chung

Disagreed on

Value of blockchain enhancement to existing DNS security


Multi-stakeholder processes within existing frameworks like ICANN are the only way forward for successful integration

Explanation

No single government, private entity, or civil society group can solve blockchain-DNS integration challenges alone. This requires multi-stakeholder engagement within existing frameworks, particularly ICANN and the GAC, to ensure successful integration in the next round.


Evidence

Reference to the upcoming ICANN new gTLD round in 2026 and the growing Web3 industry interest in applying for gTLDs


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Legal and regulatory | Infrastructure | Digital standards


Agreed with

– Emily Taylor
– Swapneel Sheth
– Rima Amin

Agreed on

Multi-stakeholder collaboration is essential for addressing DNS challenges



Edmund Chung

Speech speed

119 words per minute

Speech length

137 words

Speech time

68 seconds

DNSSEC already provides cryptographic validation, questioning the added value of blockchain for DNS security

Explanation

DNSSEC security extensions already provide cryptographic validation to prevent tampering with DNS responses. The question arises about how blockchain would add value beyond existing DNSSEC capabilities, and whether blockchain might be useful for registration data management.


Evidence

Mention of blockchain’s potential utility for registration data, domain transfers, ownership transfers, and authentication due to blockchain’s inherent characteristics


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Infrastructure | Cybersecurity | Digital standards


Disagreed with

– Participant
– Benoit Ampeau

Disagreed on

Value of blockchain enhancement to existing DNS security



Andrew Campling

Speech speed

148 words per minute

Speech length

333 words

Speech time

134 seconds

Web3 naming schemes lack mature governance structures and could benefit from DNS governance lessons

Explanation

Some Web3 naming schemes are designed with no governance as a feature, while others have immature governance simply due to their early stage of evolution. There are valuable governance lessons from the DNS approach that could be applied to these systems.


Evidence

Concerns about compute and environmental costs of doing cryptographic validation twice, and the need for caution in that direction


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Legal and regulatory | Infrastructure | Digital standards


Government action is needed to address DNS abuse gaps in ccTLD operations

Explanation

While ICANN has done great work tightening contracts to address DNS abuse issues, there’s a real gap in the lack of action by some ccTLDs. Governments need to step forward to address this gap in DNS abuse mitigation.


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Legal and regulatory | Cybersecurity | Infrastructure


Disagreed with

– Keith Drazek

Disagreed on

Scope of DNS abuse definition and responsibility



Hilde Thunem

Speech speed

158 words per minute

Speech length

1119 words

Speech time

423 seconds

Identity verification requirements and domain limits create friction for scammers while maintaining accessibility for legitimate users

Explanation

The Norwegian .no domain requires identity verification through official registers and limits domain registrations (5 for individuals, 100 for companies). These requirements irritate scammers who need to either identify themselves or steal credentials, and get limited domain quantities, while not creating excessive burden for legitimate users.


Evidence

Requirements include Norwegian organization numbers or national identity numbers, verification through official registers, and the example that Santa Claus cannot get a .no domain because he doesn’t exist in official registers


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Legal and regulatory | Cybersecurity | Digital identities


Legal frameworks with clear responsibilities and due process are essential for addressing compromised domains

Explanation

Norwegian law establishes that domain holders are responsible for domain use, with clear legal processes for authorities to act when domains are used for crimes. This includes police powers to seize domains and Consumer Protection Agency authority to require domain deletion or transfer, all with due process requirements.


Evidence

Supreme Court principle from 2009, revised Electronic Communication Act providing legal clarity, and distinction between compromised domains (where holders are victims) versus maliciously registered domains


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Legal and regulatory | Cybersecurity | Jurisdiction



Lucien Taylor

Speech speed

146 words per minute

Speech length

1186 words

Speech time

485 seconds

Cross-sectoral international signal sharing is needed to combat the rising scale of cybercrime

Explanation

The Global Signal Exchange was created in response to multi-stakeholder community demand for a missing organization in the fight against scams and fraud. Cybercrime is rising relentlessly, and criminals are moving faster than defenders, exploiting cross-border legislative tensions and sharing bad intelligence better than legitimate actors share good intelligence.


Evidence

160 organizations in accreditation pipeline, commitment from big tech including Meta and Google, negotiations with governments and law enforcement, and examples of relentless scam messages on family phones


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Cybersecurity | Legal and regulatory | Jurisdiction


Real-time threat detection and improved feedback loops are necessary to reduce time-to-mitigation for scams

Explanation

The GSE focuses on ‘Quick Factors’ – quantity, immediacy, and quality of threat signals. Current systems take an average of four days between detection and mitigation, which needs to be reduced through real-time detection, federated AI models, and improved feedback loops between signal providers and mitigators.


Evidence

Growth from 40 million to 270 million threat signals (rising by 1 million daily), comparison with Action Fraud receiving only 30,000 monthly signals, and development of confidence scores and overlap detection


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Cybersecurity | Infrastructure | Economic


Agreed with

– Graeme Bunton
– Rima Amin

Agreed on

Proactive and automated approaches are necessary to address the scale of online abuse



Rima Amin

Speech speed

164 words per minute

Speech length

980 words

Speech time

356 seconds

DNS abuse accelerates harm across multiple threat areas including domain spoofing, cyber-squatting, and deceptive redirects

Explanation

Meta’s security policy team sees DNS abuse accelerating harm across influence operations, cyber espionage, hacking, and fraud/scams. Key areas include domain spoofing that resembles legitimate domains, cyber-squatting that impersonates businesses and brands, and deceptive redirects that route users to malicious websites.


Evidence

Examples include compromised accounts being used to manage business profiles, link aggregators and shorteners being used to evade detection, and taking down 9,000 URLs impersonating Meta brands


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Cybersecurity | Consumer protection | Digital identities


Agreed with

– Lucien Taylor
– Graeme Bunton

Agreed on

Proactive and automated approaches are necessary to address the scale of online abuse


Global solutions and consistent approaches are needed rather than fragmented country-specific responses

Explanation

Due to the global nature of the internet, fragmented country-specific approaches to DNS abuse are insufficient. Global solutions with consistent approaches would be more effective than the current situation where many countries try to tackle abuse in their own way.


Evidence

Meta’s efforts to share intelligence through existing industry signal sharing programs and participation in GSE for cross-sector sharing


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Legal and regulatory | Jurisdiction | Cybersecurity


Agreed with

– Emily Taylor
– Swapneel Sheth
– Participant

Agreed on

Multi-stakeholder collaboration is essential for addressing DNS challenges



Graeme Bunton

Speech speed

186 words per minute

Speech length

1108 words

Speech time

356 seconds

DNS abuse is concentrated among a small number of registrars, making the problem manageable through targeted action

Explanation

NetBeacon’s measurement data shows that 95% of malicious domains belong to about 50 registrars or less, with 80% belonging to less than 20 registrars. This concentration means the problem space is manageable and collective action can be effective.


Evidence

Three years of public data from NetBeacon Map showing concentration patterns, and observation of abuse concentrating in smaller numbers of highly abused registrars following ICANN contract amendments


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Proactive processes and automation are essential given the scale of abuse that reactive reporting cannot handle

Explanation

NetBeacon processes around 20,000 abuse reports monthly, with peak days of 6,000-7,000 reports to individual registries or registrars. This scale demonstrates that reactive abuse reporting alone is insufficient and proactive, automated processes are necessary.


Evidence

NetBeacon Reporter handling 20,000 monthly reports, standardizing and enriching reports, distributing to multiple internet stack layers, and receiving feedback on report quality and response times


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Cybersecurity | Infrastructure | Digital standards


Agreed with

– Lucien Taylor
– Rima Amin

Agreed on

Proactive and automated approaches are necessary to address the scale of online abuse



Keith Drazek

Speech speed

165 words per minute

Speech length

2362 words

Speech time

856 seconds

ICANN’s DNS abuse definition is necessarily narrow due to content restrictions in bylaws

Explanation

ICANN’s definition of DNS abuse is relatively narrow by necessity because ICANN’s bylaws prohibit involvement in content-related matters. When discussions move into content-related harms, there’s a bright line that ICANN cannot cross.


Evidence

Reference to other venues like the Internet Infrastructure Forum where content-related discussions can take place outside ICANN’s constraints


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Legal and regulatory | Content policy | Infrastructure


Disagreed with

– Andrew Campling

Disagreed on

Scope of DNS abuse definition and responsibility



Bertrand Lachapelle

Speech speed

159 words per minute

Speech length

355 words

Speech time

133 seconds

Coordinated action across the internet stack is needed for ‘theft by deception’ categories of abuse

Explanation

The Internet Infrastructure Forum addresses abuses that cannot be handled within ICANN’s environment by engaging actors beyond DNS operators. Scams and fraud represent ‘theft by deception’ categories that require coordinated action by different actors along the internet stack.


Evidence

The IIF as a bottom-up, multi-stakeholder initiative that started in February, emergence of new intermediaries handling abuse workflow management, and platforms for exchanging signals


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Legal and regulatory | Cybersecurity | Jurisdiction



Emily Taylor

Speech speed

133 words per minute

Speech length

346 words

Speech time

155 seconds

Multi-stakeholder coordination is essential for addressing complex DNS evolution challenges

Explanation

The workshop addresses issues requiring the domain name system to evolve to cope with emerging challenges like blockchain identifiers and online harms mitigation. Each of these issues is complex in nature and requires coordination of multiple stakeholders to be effectively addressed.


Evidence

The workshop was organized jointly by the Dynamic Coalition on DNS Issues and the Dynamic Coalition on Data and Trust, bringing together various stakeholders


Major discussion point

Blockchain Identifiers and DNS Integration


Topics

Infrastructure | Legal and regulatory | Digital standards


Agreed with

– Swapneel Sheth
– Participant
– Rima Amin

Agreed on

Multi-stakeholder collaboration is essential for addressing DNS challenges



Audience

Speech speed

130 words per minute

Speech length

114 words

Speech time

52 seconds

Government institutions should adopt national domain extensions to strengthen digital identity and cybersecurity

Explanation

A participant from Senegal highlighted that despite having a .sn domain since the 1980s, government institutions still use generic providers like Gmail for official communications, which creates cybersecurity risks. The speaker asked for expert recommendations on how to encourage government use of national domains to establish proper digital identity and strengthen security.


Evidence

Senegal has had DNS since the 1980s but institutions use Gmail instead of .sn, and the State of Senegal was recently the victim of two cyber attacks


Major discussion point

Online Harm Mitigation and DNS Abuse


Topics

Digital identities | Cybersecurity | Legal and regulatory


Agreements

Agreement points

Multi-stakeholder collaboration is essential for addressing DNS challenges

Speakers

– Emily Taylor
– Swapneel Sheth
– Participant
– Rima Amin

Arguments

Multi-stakeholder coordination is essential for addressing complex DNS evolution challenges


Multi-stakeholder collaboration is needed to develop standards and best practices for responsible DNS integrations


Multi-stakeholder processes within existing frameworks like ICANN are the only way forward for successful integration


Global solutions and consistent approaches are needed rather than fragmented country-specific responses


Summary

All speakers agree that the complex challenges facing DNS – whether from blockchain integration or online harms – require coordinated multi-stakeholder approaches rather than fragmented individual efforts


Topics

Infrastructure | Legal and regulatory | Digital standards


Integration rather than replacement of DNS is the preferred approach for blockchain technologies

Speakers

– Benoit Ampeau
– Swapneel Sheth
– Participant

Arguments

Trust is essential for digital identities and blockchain integration presents unique governance and standardization challenges


DNS integrations with blockchain applications have potential but require responsible implementation to avoid security inconsistencies


Integration rather than replacement of DNS is crucial, with blockchain enhancing security as a secondary layer


Summary

Speakers consistently advocate for responsible integration of blockchain technologies with existing DNS infrastructure rather than attempting to replace the established system


Topics

Infrastructure | Cybersecurity | Legal and regulatory


Proactive and automated approaches are necessary to address the scale of online abuse

Speakers

– Lucien Taylor
– Graeme Bunton
– Rima Amin

Arguments

Real-time threat detection and improved feedback loops are necessary to reduce time-to-mitigation for scams


Proactive processes and automation are essential given the scale of abuse that reactive reporting cannot handle


DNS abuse accelerates harm across multiple threat areas including domain spoofing, cyber-squatting, and deceptive redirects


Summary

All speakers working on abuse mitigation agree that the current scale of online abuse requires moving beyond reactive approaches to proactive, automated, and real-time detection and response systems


Topics

Cybersecurity | Infrastructure | Digital standards


Similar viewpoints

These speakers share skepticism about the technical necessity and governance maturity of blockchain naming systems, emphasizing that existing DNS security mechanisms may already address many concerns

Speakers

– Benoit Ampeau
– Edmund Chung
– Andrew Campling

Arguments

Name collisions exist between blockchain identifiers and existing DNS domains, creating security risks


DNSSEC already provides cryptographic validation, questioning the added value of blockchain for DNS security


Web3 naming schemes lack mature governance structures and could benefit from DNS governance lessons


Topics

Infrastructure | Cybersecurity | Digital standards


Both speakers demonstrate that DNS abuse problems are manageable through targeted interventions – whether through registration requirements or focusing on high-abuse registrars

Speakers

– Hilde Thunem
– Graeme Bunton

Arguments

Identity verification requirements and domain limits create friction for scammers while maintaining accessibility for legitimate users


DNS abuse is concentrated among a small number of registrars, making the problem manageable through targeted action


Topics

Legal and regulatory | Cybersecurity | Infrastructure


These speakers advocate for coordinated, cross-sector approaches to combat online abuse, emphasizing that fragmented national or sector-specific responses are insufficient

Speakers

– Lucien Taylor
– Rima Amin
– Bertrand Lachapelle

Arguments

Cross-sectoral international signal sharing is needed to combat the rising scale of cybercrime


Global solutions and consistent approaches are needed rather than fragmented country-specific responses


Coordinated action across the internet stack is needed for ‘theft by deception’ categories of abuse


Topics

Legal and regulatory | Jurisdiction | Cybersecurity


Unexpected consensus

Web3 community willingness to engage in responsible integration

Speakers

– Swapneel Sheth
– Participant

Arguments

Web3 community shows interest in responsible integration through collaborative draft development


Multi-stakeholder processes within existing frameworks like ICANN are the only way forward for successful integration


Explanation

Despite potential tensions between traditional DNS governance and decentralized blockchain philosophies, there appears to be unexpected willingness from Web3 communities to work within existing multi-stakeholder frameworks and develop responsible integration standards


Topics

Infrastructure | Digital standards | Legal and regulatory


Concentration of DNS abuse making the problem manageable

Speakers

– Graeme Bunton
– Hilde Thunem

Arguments

DNS abuse is concentrated among a small number of registrars, making the problem manageable through targeted action


Identity verification requirements and domain limits create friction for scammers while maintaining accessibility for legitimate users


Explanation

Rather than DNS abuse being an overwhelming distributed problem, there’s consensus that it’s actually concentrated and manageable through targeted interventions, which is more optimistic than might be expected


Topics

Cybersecurity | Infrastructure | Legal and regulatory


Overall assessment

Summary

The speakers demonstrate strong consensus on the need for multi-stakeholder collaboration, responsible integration of new technologies with existing DNS infrastructure, and coordinated approaches to combat online abuse. There’s agreement that both blockchain integration and abuse mitigation require proactive, systematic approaches rather than fragmented responses.


Consensus level

High level of consensus with significant implications for policy development. The agreement suggests that the DNS community is aligned on fundamental principles of responsible innovation and coordinated abuse mitigation, providing a strong foundation for developing concrete standards and implementation frameworks. The consensus spans technical, policy, and operational perspectives, indicating mature understanding of the challenges and viable paths forward.


Differences

Different viewpoints

Value of blockchain enhancement to existing DNS security

Speakers

– Participant
– Benoit Ampeau
– Edmund Chung

Arguments

Integration rather than replacement of DNS is crucial, with blockchain enhancing security as a secondary layer


Trust is essential for digital identities and blockchain integration presents unique governance and standardization challenges


DNSSEC already provides cryptographic validation, questioning the added value of blockchain for DNS security


Summary

While the Participant advocates for blockchain as a secondary security layer on top of DNSSEC, Edmund Chung questions whether blockchain adds value beyond existing DNSSEC capabilities, and Benoit Ampeau emphasizes the governance and standardization challenges that blockchain integration presents.


Topics

Infrastructure | Cybersecurity | Digital standards


Scope of DNS abuse definition and responsibility

Speakers

– Keith Drazek
– Andrew Campling

Arguments

ICANN’s DNS abuse definition is necessarily narrow due to content restrictions in bylaws


Government action is needed to address DNS abuse gaps in ccTLD operations


Summary

Keith Drazek defends ICANN’s narrow definition of DNS abuse as necessary due to bylaw restrictions on content matters, while Andrew Campling argues for broadening the definition and criticizes the lack of action by ccTLDs, suggesting governments should step forward.


Topics

Legal and regulatory | Cybersecurity | Infrastructure


Unexpected differences

Environmental and computational costs of dual cryptographic validation

Speakers

– Andrew Campling
– Participant

Arguments

Web3 naming schemes lack mature governance structures and could benefit from DNS governance lessons


Integration rather than replacement of DNS is crucial, with blockchain enhancing security as a secondary layer


Explanation

Andrew Campling raised an unexpected concern about the environmental and computational costs of doing cryptographic validation twice (both DNSSEC and blockchain), which wasn’t anticipated in a discussion primarily focused on technical integration challenges. This practical sustainability concern contrasts with the Participant’s focus on security enhancement benefits.


Topics

Infrastructure | Cybersecurity | Development


Overall assessment

Summary

The discussion revealed relatively low levels of fundamental disagreement, with most speakers sharing common goals around maintaining DNS security and stability while enabling innovation. The main disagreements centered on technical approaches (blockchain value-add vs. existing DNSSEC) and governance scope (narrow vs. broad DNS abuse definitions).


Disagreement level

Low to moderate disagreement level. The speakers generally aligned on core principles but differed on implementation approaches and technical solutions. This suggests that while there are legitimate concerns to address, the multi-stakeholder community has sufficient common ground to work toward collaborative solutions for both blockchain integration and DNS abuse mitigation.


Partial agreements

Similar viewpoints

These speakers share skepticism about the technical necessity and governance maturity of blockchain naming systems, emphasizing that existing DNS security mechanisms may already address many concerns (a collision-check sketch follows this block)

Speakers

– Benoit Ampeau
– Edmund Chung
– Andrew Campling

Arguments

Name collisions exist between blockchain identifiers and existing DNS domains, creating security risks


DNSSEC already provides cryptographic validation, questioning the added value of blockchain for DNS security


Web3 naming schemes lack mature governance structures and could benefit from DNS governance lessons


Topics

Infrastructure | Cybersecurity | Digital standards
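
As a concrete illustration of the name-collision argument above, one rough test is whether a blockchain naming suffix is (or later becomes) a delegated TLD in the public DNS root. This is a minimal sketch, not from the session; the suffix list is illustrative and it assumes Python with dnspython installed.

```python
import dns.resolver

def tld_exists_in_dns(label: str) -> bool:
    """True if `label` is delegated as a top-level domain in the public root."""
    try:
        dns.resolver.resolve(label + ".", "NS")
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False

# Illustrative blockchain-style suffixes; a hit marks a potential collision
# between a blockchain namespace and the existing DNS.
for suffix in ["eth", "crypto", "zil"]:
    hit = tld_exists_in_dns(suffix)
    print(f".{suffix}:", "delegated in DNS - collision risk" if hit else "no DNS delegation today")
```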


Both speakers demonstrate that DNS abuse problems are manageable through targeted interventions – whether through registration requirements or focusing on high-abuse registrars (a toy registration-gate sketch follows this block)

Speakers

– Hilde Thunem
– Graeme Bunton

Arguments

Identity verification requirements and domain limits create friction for scammers while maintaining accessibility for legitimate users


DNS abuse is concentrated among a small number of registrars, making the problem manageable through targeted action


Topics

Legal and regulatory | Cybersecurity | Infrastructure
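
As a toy illustration of the friction mechanisms these speakers describe, a registrar-side gate might combine an identity check with a per-registrant cap. This sketch is not any registry's actual policy; the threshold is invented for illustration.

```python
from dataclasses import dataclass

MAX_ACTIVE_DOMAINS = 10  # invented cap, for illustration only

@dataclass
class Registrant:
    identity_verified: bool  # e.g. checked against a national ID scheme
    active_domains: int

def may_register(r: Registrant) -> bool:
    """Allow a new registration only for verified registrants under the cap.

    A legitimate user clears both checks once; a scammer registering
    domains in bulk hits the cap or the verification step."""
    return r.identity_verified and r.active_domains < MAX_ACTIVE_DOMAINS

print(may_register(Registrant(identity_verified=True, active_domains=3)))   # True
print(may_register(Registrant(identity_verified=False, active_domains=0)))  # False
```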


These speakers advocate for coordinated, cross-sector approaches to combat online abuse, emphasizing that fragmented national or sector-specific responses are insufficient

Speakers

– Lucien Taylor
– Rima Amin
– Bertrand Lachapelle

Arguments

Cross-sectoral international signal sharing is needed to combat the rising scale of cybercrime


Global solutions and consistent approaches are needed rather than fragmented country-specific responses


Coordinated action across the internet stack is needed for ‘theft by deception’ categories of abuse


Topics

Legal and regulatory | Jurisdiction | Cybersecurity


Takeaways

Key takeaways

Trust is fundamental for digital identities and DNS systems, requiring careful integration of new technologies like blockchain rather than replacement


Name collisions between blockchain identifiers and existing DNS domains create security risks that need proactive management


Multi-stakeholder collaboration is essential for developing responsible DNS-blockchain integration standards and addressing online harms


DNS abuse is concentrated among a small number of registrars (95% from ~50 registrars), making targeted action feasible


Cross-sectoral international signal sharing and real-time threat detection are critical for combating the rising scale of cybercrime


Identity verification requirements and domain limits can create effective friction for scammers while maintaining legitimate user access


Proactive processes and automation are necessary given the scale of abuse that reactive reporting alone cannot handle


Legal frameworks with clear responsibilities and due process are essential for addressing compromised domains


The internet infrastructure requires coordinated action across the entire stack to effectively combat ‘theft by deception’ categories of abuse


Resolutions and action items

Continue multi-stakeholder engagement through Dynamic Coalitions on DNS Issues and Data and Trust


Develop risk assessment framework for blockchain identifier systems (AFNIC and DNSRF collaboration)


Advance responsible DNS integration standards through IETF DNSOP Working Group drafts


Expand Global Signal Exchange participation with 160 organizations in accreditation pipeline


Utilize Internet Infrastructure Forum as a space for discussing cross-border abuse coordination


Engage ICANN GAC and other stakeholders in next GTLD round to address blockchain integration concerns


Continue development of NetBeacon tools for abuse reporting and mitigation


Promote post-quantum DNSSEC development through hackathons and IETF meetings


Unresolved issues

How to ensure blockchain community participation in responsible integration efforts and what incentives exist for their engagement


Whether blockchain provides meaningful security enhancement over existing DNSSEC given computational and environmental costs


How to address DNS abuse gaps in ccTLD operations and encourage government action


How to broaden ICANN’s narrow DNS abuse definition while respecting content restrictions in bylaws


How to scale abuse mitigation processes to handle millions of daily threat signals effectively


How to develop global consistent approaches for DNS abuse rather than fragmented country-specific responses


How to improve feedback loops and evidence quality in threat intelligence sharing


How to encourage government institutions to use national ccTLDs for digital identity and security purposes


Suggested compromises

Integration rather than replacement of DNS with blockchain technologies, using blockchain as a secondary security layer


Responsible DNS integration that preserves existing DNS governance while enabling new blockchain use cases


Behavior-based trust systems rather than purely identity-based systems for platform access


Coordinated multi-layer approach involving registries, registrars, hosting providers, and content delivery networks


Bottom-up, multi-stakeholder self-organization initiatives to address cross-border jurisdictional challenges


Balanced approach between creating friction for bad actors while maintaining accessibility for legitimate users


Combination of reactive abuse reporting with proactive automated detection and prevention systems


Thought provoking comments

The critical choice I want to highlight is that we need to answer the question… do we integrate this blockchain system into the global domain name system, or do we watch our infrastructure fragment in dangerous ways in which fraud will likely just intensify?

Speaker

Esther Yarmitsky


Reason

This comment reframed the entire blockchain-DNS discussion from a technical implementation question to a fundamental strategic choice about internet infrastructure integrity. It elevated the conversation beyond technical details to existential concerns about internet fragmentation and security.


Impact

This shifted the discussion from ‘how to integrate’ to ‘why we must integrate responsibly.’ It connected the blockchain naming discussion directly to fraud prevention, creating a bridge between the two main topics of the workshop and emphasizing urgency in decision-making.


DNS integrations come with their own set of challenges. For example, how do we think about a domain name that’s transferred or expires after the domain name has been integrated into the blockchain application? How do we avoid risks… with inconsistencies, with the security concerns that come along when the same names are used across multiple systems?

Speaker

Swapneel Sheth


Reason

This comment introduced concrete technical challenges that hadn’t been fully articulated, moving beyond theoretical concerns to practical implementation issues. It highlighted the lifecycle management problems that could undermine trust in both systems.


Impact

This grounded the discussion in practical realities and led to more detailed technical exchanges about DNSSEC, governance models, and the need for standards. It prompted Andrew Campling’s intervention about governance lessons from DNS that could benefit blockchain systems.


The criminals are moving faster than us. They’re exploiting cross-border legislative tensions and sharing bad things between each other better than we share things.

Speaker

Lucien Taylor


Reason

This stark observation highlighted a fundamental asymmetry in the fight against online harms – that criminal networks are more agile and collaborative than legitimate defense systems. It challenged the assumption that current approaches are adequate.


Impact

This comment shifted the tone from technical solutions to strategic urgency, emphasizing the need for speed and coordination. It provided context for why initiatives like the Global Signal Exchange are necessary and influenced subsequent discussions about real-time threat detection and cross-sector collaboration.


95% of the malicious domains that we see belong to about 50 registrars or less, 80% belongs to less than 20… The problem space is not huge. There’s differences between the registrars and TLDs also in that data. But we can sort of wrap our collective arms around the scope of that problem.

Speaker

Graeme Bunton


Reason

This data-driven insight fundamentally reframed the scale of the DNS abuse problem from seemingly overwhelming to manageable, while also pinpointing where efforts should be concentrated. It provided concrete evidence that targeted interventions could be highly effective.


Impact

This shifted the discussion from broad, systemic concerns to focused, actionable solutions. It influenced Andrew Campling’s follow-up question about ccTLD accountability and reinforced the importance of data-driven approaches that other speakers had mentioned.
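
Figures like "95% from about 50 registrars" fall out of a simple cumulative-share calculation over an abuse feed. The sketch below, written with made-up numbers rather than Bunton's actual data, shows the arithmetic behind such concentration claims.

```python
from collections import Counter

# Made-up feed: the sponsoring registrar recorded for each malicious domain.
abuse_feed = ["reg-A"] * 600 + ["reg-B"] * 250 + ["reg-C"] * 100 + ["reg-D"] * 50
counts = Counter(abuse_feed)

def share_of_top(counts: Counter, n: int) -> float:
    """Fraction of all abuse events attributable to the n worst registrars."""
    top = [c for _, c in counts.most_common(n)]
    return sum(top) / sum(counts.values())

print(f"Top 2 registrars account for {share_of_top(counts, 2):.0%} of abuse")  # 85%
```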


The real gap here is the lack of action by some of the ccTLDs. So, how do we get governments to also step forward to address this? So, maybe this is the right forum for that, as some of them are here.

Speaker

Andrew Campling


Reason

This comment identified a critical governance gap in the multi-stakeholder approach to DNS abuse, pointing out that while ICANN has tightened gTLD contracts, ccTLDs operate under different governance models that may not be addressing abuse adequately.


Impact

This intervention highlighted the limitations of current policy approaches and the need for government engagement, connecting technical solutions to policy and governance challenges. It demonstrated how the multi-stakeholder model itself has gaps that need addressing.


This whole thing is a speed and scale challenge and it’s a data challenge. It’s a data sharing challenge… we see the emergence of new intermediaries that handle the abuse workflow problem, management… this is building the ecosystem that, at last, will allow later on to engage law enforcement and other actors.

Speaker

Bertrand Lachapelle


Reason

This synthesized the entire discussion by identifying the core challenges (speed, scale, data sharing) and recognizing the emergence of new institutional forms to address these challenges. It provided a systems-level view of how various initiatives fit together.


Impact

This comment served as a capstone that tied together the various threads of discussion, showing how technical solutions, policy initiatives, and new organizational forms are part of an evolving ecosystem response to online harms.
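
One way to picture the "data sharing challenge" Lachapelle describes is as a minimal common record that abuse-workflow intermediaries could exchange. The fields below are purely illustrative and do not reflect the Global Signal Exchange's actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AbuseSignal:
    domain: str        # where the abuse was observed
    category: str      # e.g. "phishing", "malware", "theft-by-deception"
    evidence_url: str  # pointer to supporting evidence for due process
    confidence: float  # reporter's confidence, 0.0-1.0
    reported_by: str   # accredited reporting organisation
    observed_at: str   # ISO 8601 timestamp

signal = AbuseSignal(
    domain="example-scam.test",
    category="phishing",
    evidence_url="https://reports.example/123",
    confidence=0.9,
    reported_by="example-org",
    observed_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(signal), indent=2))  # ready to exchange as JSON
```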


Overall assessment

These key comments fundamentally shaped the discussion by elevating it from technical implementation details to strategic infrastructure decisions, introducing concrete data that reframed problem scope, and highlighting critical governance gaps. Esther’s framing of integration versus fragmentation set the stakes, while Swapneel’s technical challenges grounded the discussion in practical realities. Lucien’s observation about criminal agility created urgency, Graeme’s data provided actionable focus, Andrew’s governance critique exposed policy gaps, and Bertrand’s synthesis showed how various initiatives form an emerging ecosystem response. Together, these interventions transformed what could have been separate technical and policy discussions into a coherent analysis of how internet infrastructure must evolve to address emerging threats while maintaining trust and stability.


Follow-up questions

How do we ensure responsible integration happens in a multi-stakeholder manner?

Speaker

Carolina from Oxhill


Explanation

This addresses the governance challenge of coordinating multiple stakeholders in blockchain-DNS integration while maintaining security and trust


How to get the blockchain community to participate – what incentives exist for them to participate?

Speaker

Carolina from Oxhill


Explanation

Understanding motivation and incentive structures is crucial for successful multi-stakeholder engagement in responsible DNS integration


How can we get more work done to broaden the definition of DNS abuse so it has even more impact?

Speaker

Andrew Campling


Explanation

Current ICANN definition of DNS abuse is narrow and doesn’t address issues like CSAM, limiting the scope of mitigation efforts


How do we get governments to step forward to address DNS abuse issues, particularly regarding ccTLD action?

Speaker

Andrew Campling


Explanation

There’s a gap in enforcement where some ccTLDs are not taking adequate action against DNS abuse, requiring government intervention


What would experts propose to help states use their ccTLD in administrative services for digital identity and security strengthening?

Speaker

Yuv from Senegal


Explanation

Many government institutions use generic domains instead of their national ccTLD, creating cybersecurity risks and undermining digital identity


Will blockchain identifier solution providers engage in responsible integration with DNS?

Speaker

Benoit Ampeau


Explanation

Uncertainty exists about whether blockchain providers will participate in responsible integration frameworks being developed


How can federated AI systems be effectively integrated to detect fraud in real-time on top of existing DNS structures?

Speaker

Esther Yarmitsky


Explanation

This represents a technical challenge for implementing AI-powered fraud detection while maintaining DNS stability and performance


How can we develop improved, reliable, and accessible proactive processes for DNS abuse mitigation?

Speaker

Graeme Bunton


Explanation

Current reactive approaches don’t scale effectively – proactive measures are needed to get ahead of abuse at the scale it’s occurring


How can we improve the feedback loop mechanism for threat intelligence signals?

Speaker

Lucien Taylor


Explanation

Developing effective feedback loops is challenging but essential for improving signal quality and creating evidence-based data for enforcement


How can we achieve global consistency in approaches to online harm mitigation across different jurisdictions?

Speaker

Rima Amin


Explanation

Different countries are tackling online harms in their own ways – a consistent global approach would be more effective given the internet’s borderless nature


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #209 Safeguarding Diverse Independent News Media in Policy

Lightning Talk #209 Safeguarding Diverse Independent News Media in Policy

Session at a glance

Summary

Amy Mitchell from the Center for News Technology and Innovation presented a discussion on safeguarding diverse independent news media through policy considerations. She highlighted that society is currently passing more journalism-related laws than ever before, while simultaneously facing challenges in defining what constitutes journalism in the digital age. Mitchell emphasized that 50% of journalists surveyed internationally had experienced some form of government censorship, and global press freedom scores have declined to 1993 levels.


The presentation focused on how well-intentioned policies can have unintended consequences on press independence and journalism viability. Mitchell outlined critical questions that should be explored in any digital policy, including definitional language, oversight authority, diversity protection, public service, and cross-border impacts. She presented findings from three major studies conducted by her organization.


The first study examined 32 “fake news” policies across 31 countries between 2020-2023, finding that most policies created greater risks to journalistic independence than they provided protection. Only seven of these policies actually defined what constitutes fake or illegal content, while 14 placed control directly in government hands. The second study analyzed 23 media remuneration policies designed to provide revenue streams to journalism, revealing dramatic variations in how digital usage and compensation were defined across different jurisdictions.


Mitchell emphasized the importance of considering public perspectives in policy development, noting that the public has a broad definition of journalism producers beyond traditional news organizations. She concluded by advocating for collaborative, data-driven conversations among policymakers, technology companies, media organizations, and civil society to balance technological benefits while mitigating potential harms to independent journalism.


Keypoints

**Major Discussion Points:**


– **Growing Policy Challenges for Journalism**: The discussion highlights how more laws affecting journalism are being passed than ever before, while it’s simultaneously becoming harder to define what constitutes journalism. This is occurring amid declining press freedoms globally, with 50% of surveyed journalists experiencing some form of government censorship.


– **Definitional Problems in Policy Language**: A critical issue identified across policies is the vague or inconsistent definition of key terms like “fake news,” “journalism,” and “illegal content.” Most policies studied (25 out of 32 fake news policies) failed to clearly define these terms, leaving interpretation to government authorities.


– **Government Authority and Control Mechanisms**: The research reveals that many policies place oversight authority directly in government hands, with 14 of 32 fake news policies giving control to central government. This raises concerns about potential misuse of well-intentioned policies for information control.


– **Media Remuneration and Financial Sustainability**: The discussion covers policies aimed at creating revenue streams for struggling journalism industries through digital platform compensation, but notes wide variation in how “usage” and compensation are defined across 23 different policies.


– **Unintended Consequences of Digital Policies**: Even well-intentioned policies designed to protect the information space can inadvertently harm press independence and diversity. The speaker emphasizes the need to consider cross-border impacts and long-term effects, particularly with emerging AI policies.


**Overall Purpose:**


The discussion aims to present research findings on how digital policies worldwide are impacting journalism and press freedom, while proposing a framework of critical questions that policymakers should consider to safeguard independent, diverse media while addressing legitimate policy concerns.


**Overall Tone:**


The tone is academic and analytical, maintaining objectivity while expressing underlying concern about threats to press freedom. Mitchell presents as a researcher sharing findings rather than an advocate, emphasizing data-driven analysis. The tone remains consistent throughout – informative and measured, though with clear implications about the risks facing independent journalism. During the Q&A, the tone becomes slightly more conversational while maintaining the same analytical approach.


Speakers

– **Amy Mitchell**: Director, Center for News Technology and Innovation (CNTI). Has 25 years of experience at Pew Research Center where she helped launch and directed the journalism line of research. Currently leads a global research center focused on enabling independent, sustainable news media, maintaining an open internet, and fostering informed public policy discussions.


– **Audience**: Multiple audience members who asked questions during the Q&A session. Areas of expertise, roles, and titles not specified.


Additional speakers:


None identified beyond those in the speakers names list.


Full session report

# Safeguarding Diverse Independent News Media: Policy Considerations and Global Challenges


## Executive Summary


Amy Mitchell presented a comprehensive analysis of how digital policies worldwide are impacting journalism and press freedom. Drawing from extensive research conducted by her organisation, the Center for News Technology and Innovation (CNTI), Mitchell highlighted the unprecedented challenges facing independent media in an era where more journalism-related laws are being passed than ever before. The discussion revealed alarming trends in global press freedom, with 50% of journalists surveyed having experienced some form of government censorship, and worldwide press freedom scores declining to 1993 levels.


The presentation centred on the critical observation that well-intentioned policies designed to protect the information space often create unintended consequences that harm journalistic independence and diversity. Through analysis of fake news policies, media remuneration frameworks, and emerging AI regulations, Mitchell demonstrated how vague definitional language and inappropriate oversight mechanisms can transform protective policies into tools for information control.


## Background and Research Context


Mitchell began by establishing her background and the context for CNTI’s work. Coming from 25 years at the Pew Research Center where she helped launch the journalism line of research, Mitchell now leads CNTI, an organisation that is “not quite two years, a little over a year and a half now” old. CNTI works in partnership with organisations including the Global Fund for Media Development (GFMD), Online News Association, and several others.


The research presented emerged from collaborative efforts including a Mexico City convening co-sponsored with OEM, where stakeholders from across the journalism ecosystem gathered to examine how digital policies affect independent media. This work addresses the fundamental challenge that society is experiencing an unprecedented volume of legislation affecting journalism, both directly and indirectly, at a time when press freedoms are declining globally.


## Current State of Journalism and Policy Landscape


Mitchell established the gravity of the current situation facing journalism globally. The research revealed that 50% of the journalists surveyed had experienced some form of government censorship within the past year. Global press freedom scores have deteriorated to levels not seen since 1993, creating an environment where policies originally intended for protection are increasingly being used to imprison and control journalists.


The challenge is compounded by the evolving nature of journalism itself. In the digital age, traditional definitions of journalism and news organisations no longer capture the full spectrum of information producers that the public relies upon. This definitional ambiguity creates vulnerabilities in policy frameworks that may inadvertently exclude legitimate journalism whilst failing to address actual threats to information integrity.


Mitchell emphasised that the volume of legislation affecting journalism is unprecedented, with lawmakers worldwide grappling with how to regulate digital spaces without clear understanding of the implications for press freedom and independent media.


## Research Methodology and Key Questions


To address these challenges systematically, Mitchell outlined key questions that CNTI examines when analysing digital policies affecting journalism:


– How policies define crucial terms such as “journalism,” “fake news,” “illegal content,” and “digital usage”


– Who has oversight authority to interpret and enforce policies


– Whether policies adequately protect diverse voices in the media landscape


– How clearly policies articulate their goals for serving public information needs


– The cross-border impacts of national policies


Mitchell stressed the importance of thinking beyond national boundaries when considering policy impacts, as digital policies often have far-reaching effects through international platforms and the tendency for policies to be copied across jurisdictions, sometimes by authoritarian regimes for harmful purposes.


## Analysis of Fake News Policies


One of CNTI’s most significant studies examined 32 fake news policies across 31 countries implemented between 2020 and 2023. The findings revealed troubling patterns that suggest these policies create greater risks to journalistic independence than protection for the information space.


The most striking finding was the widespread failure to define key terms. Only seven of the 32 policies actually defined what constitutes fake or illegal content, leaving interpretation of these crucial concepts to authority figures. This definitional vacuum creates dangerous ambiguity that can be exploited for information control rather than protection.


The research revealed concerning patterns in oversight authority, with 14 of the 32 policies placing control very specifically in government hands. The penalties varied dramatically, with imprisonment terms ranging from less than one month to over three years (with Zimbabwe specifically mentioned for the longest sentences), reflecting inconsistent and often disproportionate regulatory approaches.


Mitchell emphasised that whilst the stated intentions of these policies were often laudable—protecting citizens from harmful misinformation—the practical implementation frequently created tools that could be used to suppress legitimate journalism and dissenting voices.


## Media Remuneration Policy Analysis


The second major study examined 23 media remuneration policies implemented between 2018 and August 2024, designed to create new revenue streams for struggling journalism industries through compensation from digital platforms. These policies, including various US state-level initiatives, represent attempts to address the economic challenges facing traditional media in the digital age.


The analysis showed dramatic variation in how different jurisdictions defined key concepts such as “digital content usage” and compensation criteria. This inconsistency creates confusion for both platforms and media organisations operating across multiple jurisdictions and may lead to uneven outcomes.


Particularly concerning was the finding that these policies inconsistently addressed the diversity of news media, with many appearing to favour large, established operations over smaller, independent outlets. Furthermore, most policies failed to clearly articulate how they would better serve public information needs, risking becoming mere economic transfers rather than tools for improving the information landscape.


## AI Policy Implications


Mitchell’s research also examined emerging artificial intelligence policies and their implications for journalism. While few AI policies directly address journalism, they have significant indirect impacts through their effects on content creation, distribution, and liability frameworks.


The research revealed that AI policies often place liability on users, including journalists, without providing clear definitions of appropriate use or adequate safeguards for legitimate journalistic activities. This creates uncertainty for journalists who wish to benefit from AI technologies whilst avoiding legal risks.


Mitchell emphasised the need to consider how AI policies affect journalists’ ability to harness technological benefits whilst guarding against potential risks, particularly given the cross-border nature of AI technologies.


## Public Perspective and Behaviour


A crucial element of Mitchell’s analysis focused on understanding public perspectives on journalism and information consumption. CNTI conducted surveys in four countries to understand how the public defines journalism and consumes information.


The research revealed that the public has a much broader definition of journalism than traditional policy frameworks typically recognise, including individual journalists working independently, mission-driven content creators, and principle-guided information producers. Both journalists and the public view technology as critically important for news production, gathering, dissemination, and consumption.


Importantly, Mitchell noted that substantial research shows disinformation campaigns don’t have as much impact on what people actually believe as previously thought. Instead, people’s own behaviour and choices about where to seek information appear to be more significant factors in determining what they accept as credible.


## Discussion and Key Exchanges


The presentation generated significant discussion during the question-and-answer session. Key exchanges included:


**Methodological Questions**: An audience member questioned the value of including established autocracies in policy analysis. Mitchell responded by emphasising that autocratic policies matter because they affect real people and can be copied by other countries for harmful purposes, highlighting the interconnected nature of global policy development.


**EU Policy Analysis**: When questioned about including individual EU member states rather than focusing on EU-wide legislation, Mitchell explained that they specifically looked at country-specific fake news policies rather than broader frameworks like the Digital Services Act.


**Alternative Approaches**: Discussion explored focusing on the demand side of disinformation—understanding why people believe and engage with false information—rather than concentrating primarily on supply-side regulation. Mitchell noted that psychological defence approaches, such as those employed by Sweden’s dedicated agency, could offer valuable alternatives to traditional content moderation policies.


## Key Findings and Implications


The comprehensive research yielded several critical findings:


**Definitional Failures**: The widespread failure to define key terms in digital policies creates dangerous ambiguities that can be exploited for information control, suggesting that policy development must prioritise clear, precise definitions.


**Government Oversight Concerns**: The tendency to place oversight authority directly in government hands raises serious concerns about potential misuse of well-intentioned policies.


**Diversity Challenges**: Many policies fail to adequately protect diverse voices in the media landscape, often favouring large operations over smaller, independent outlets.


**Cross-Border Policy Migration**: Policies developed in one jurisdiction often influence or are directly copied by others, sometimes for harmful purposes, emphasising the global responsibility that comes with policy development.


**Public Behaviour Complexity**: The research challenged assumptions about disinformation effectiveness, suggesting that individual choice and political bias may be more significant factors than external manipulation campaigns.


## Future Research and Recommendations


Mitchell announced that CNTI plans to spend more time examining the relationship between public behaviour and disinformation policy effectiveness. The organisation is developing a research working group focused on public response to AI content labelling and watermarking systems.


Key recommendations included:


– Collaborative, data-driven conversations among policymakers, technology companies, media organisations, researchers, and civil society


– Policy design based on clear understanding of how the public actually seeks and consumes information


– Clear articulation of desired digital information landscape goals before implementing content moderation policies


– Integration of specific safeguards against government overreach and mechanisms to protect diverse, independent voices


## Conclusion


Mitchell’s research revealed the complex and often contradictory nature of contemporary digital policy as it affects journalism and press freedom. While many policies intend to protect the information space and support quality journalism, implementation often falls short of these goals and may create new threats to press independence and diversity.


The findings suggest that effective digital governance requires more sophisticated understanding of public behaviour, global policy dynamics, and the changing nature of information production and consumption. The research provides a valuable framework for approaching these complex issues, emphasising the need for international coordination and careful consideration of unintended consequences in policy development.


As Mitchell emphasised, the challenge lies in creating policies that actually serve public information needs while protecting the diverse, independent media ecosystem that democracy requires.


Session transcript

Amy Mitchell: Hello, hello, that's loud. I'm Amy Mitchell from the Center for News Technology and Innovation, and I look forward to talking with you today about safeguarding diverse independent news media in policy. We are at a point in our society today where we are debating and passing more laws that relate to journalism, both directly and indirectly, than we ever have. This is occurring at the same time that it is harder than ever to put borders around what journalism is and what it is not, both from the perspective of the business, legal, and policy space and in terms of what the public considers journalism: the things and sources that they rely on to keep them informed on a daily basis. We are also seeing a growing array of issues in the policy space that relate to journalism, everything from content moderation to protection of the internet to disinformation to artificial intelligence, at times directly related and at other times indirectly related. In the digital landscape, policies that are passed in one country very much tend to impact, and be impacted by, policies that are passed in another country, so it's very important to be thinking about these things across country and regional borders. We are seeing all of this happen at a time when we are facing growing government encroachment on information control and on press freedoms. This comes through in data we gather from journalists themselves. What you see in front of you is an international survey of journalists that CNTI conducted with a number of partner organizations like GFMD, the Global Fund for Media Development, the Online News Association, and several others around the world. You see here that 50% of the journalists that we surveyed, this was in the fall of last year, had experienced some form of government censorship, ranging from not being allowed to cover or access an event, to complaints about their content, to imprisonment; 50% had experienced at least one form of that in the last year. We're also seeing the world press freedom scores across the board go down, according to the entities that track this data year in and year out, so much so that we are down to 1993 levels of world press freedom. This holds both when we look at government censorship and at the independent protection of journalists in this space. We're also hearing it in the conversations that we have. One of the things CNTI does is host convenings, which are really daylong working sessions with a combination of folks from journalism, technology, the policy space, research, and civil society to talk about these issues. This is one that we held in Mexico City; OEM was our co-sponsor there. This conversation specifically focused on how to continue to produce journalism amid ongoing security threats, both on and offline. A lot of the discussion from those in the room had to do not only with feelings of safety and other kinds of online abuse, but also with the ways that policy had been used: policy that was technically, in theory, aimed at the protection of information or of journalists was actually being used to imprison or otherwise control journalists and the information space.
If we look across the policy space amid this landscape today, what we find in the data is that even the best-intended policy, the kind that is really looking to safeguard our information space and create vibrant digital landscapes, can end up having unintended consequences for the independence of our press and for journalism viability more broadly. The question becomes: how can policy address the issue areas of concern, which we're talking about here this week, while safeguarding an independent, diverse media and the public's access to a plurality of fact-based news? That's where CNTI comes in. The Center for News Technology and Innovation is a global research center. We've been around not quite two years, a little over a year and a half now, but we are an organization that focuses on enabling independent, sustainable news media, maintaining an open internet, and fostering more informed public policy discussions. We do this by conducting research as well as synthesizing research from others. My background is research: I come from 25 years at the Pew Research Center, where I helped launch the journalism line of research and directed it for many years, and I have now decided to move into this space. We also help synthesize research; as a research community, I think we sometimes do a pretty lousy job of helping make sense of what our research all adds up to, where there are gaps, and where we need to do more. We then host convenings, like I was talking about, to try to really work through some of the challenging questions in these spaces and come up with informed solutions. Back to the question: how can policy address issue-area concerns while safeguarding an independent, diverse news media and a vibrant digital landscape? Here we get to some of the research that CNTI has done over the last year, which I'm going to talk about over the next few minutes. Our work ranges from policy analysis to issue primers to surveys. We did the journalist survey, and we've done a four-country public survey as well, which was a mix of focus groups and fully representative statistical surveys, plus convenings around the world, and more. What I put forward today is a set of questions that we have found to be critical to explore in any digital policy being debated and thought about today: both those that directly relate to journalism, but also very much those where journalism and a vibrant digital landscape are likely to be impacted even if the policy is not directly related. I'm going to spend more time on each of these, so I'll run through them very quickly right now. The first question is: what's the definitional language? We started a lot of our work just asking, how is journalism being defined here? How about a journalist? How about news? And for the other areas being addressed in the policy, what's the definitional language, and how consistent or inconsistent might it be across policies in ways that matter? On the independence of journalism, a big question to spend a lot of time on is: who is given the oversight authority to determine the details of how a law gets practiced and enacted? Diversity: one of the things the internet brought was a diverse landscape of information that served minority communities and brought new kinds of voices into this space. How do we work to build a vibrant digital landscape in a way that still safeguards that diversity of voice, especially in the news and information space?
Serving the public: we talk about doing all of this to serve the public. How are we actually serving the public in these policies, especially given the behaviors and the ways that they access and get information today? Then social relevance, being forward-thinking, cross-border impacts, and also unintended consequences that may or may not be clearly evident on the surface. These questions to explore all take time and deep thought and collaborative discussion. The first study I'm going to share a little bit about is one that looked at fake news policies around the world. We put "fake news" in quotes; many of these actually use the language of addressing fake news. We looked at 32 policies that were proposed or enacted between 2020 and 2023. They cut across 31 countries, and I'll say straight out that more of these were in autocracies, but 11 of the countries included here are democracies, so the findings really resonate across different government types in terms of some of the takeaways. The overall conclusion was that these policies created greater risk to journalistic independence and diversity, as well as to the public's access to a diversity of fact-based news, than they did to actually safeguard the information space. Getting back to definitions, one of the first things we looked at was: how is fake news or illegal content defined? And how might news or journalism, or what might be considered real news, be defined? You can see here that only seven of the 32 actually put a definition around what fake or illegal content was, and the remainder left that vague, which leaves it up to the authority figure, the one who gets to make the decision about the enactment of that policy, to put those definitions in place. Same thing on the news and journalism side: only two actually spoke to what real news or journalism might be. And this is very much a double-edged sword. We talked a lot about this with some of the folks at UNESCO; Guy Berger was an advisor on this project, and he's done a lot of work with the UN and information integrity. One of the things we talked about in this report is the degree to which defining these things in policy can help safeguard journalism and an independent press, but can also be language that gets used against journalism and an independent press. So it's very important both to really consider the definitional language and to then get to the next question, which is: okay, who's the authority figure? Who's the one that then gets to determine what that language means? That's what we looked at next in this study, and you can see 14 of the 32 policies very specifically put control in the hands of the government itself. Most of those spoke about the central government being in control. Others gave the authority to some sort of body within the government, where it was often unclear how closely associated that body was with the central government. The remainder left it unclear who had authority to arbitrate that law or policy, which then naturally puts it back in the hands of the government.
The next question then becomes: okay, what's the punishment if you are determined to have been a part of this fake news? The bulk of these policies did have some sort of imprisonment, ranging from less than one month, I think, up to over three years in Zimbabwe. So there is real government action that can be taken against journalists through the language in these policies. Before I move on to the next study, if we broaden this out to content moderation policy more generally, one of the things that's really important to ask in that space of policy, disinformation, content moderation, and so on, is: what is the end goal for the way the content is going to look? That very question is one of the things we're spending more time looking at in the coming year. It's not clear within the policy conversation space that we've done a very good job of articulating what the digital landscape would look like if a given content moderation policy gets put in place. What is it that's bad, that's out? What is it that's good, that's in? What's the mix? There's always going to be a mix of content in there, so it's worth really taking the time to think about the ultimate goals. The second study I'll share a little bit about is an example of one that was very directly related to journalism and the media space. These were media remuneration policies, which are basically revenue policies looking to provide a revenue stream to add some financial lifeblood into the journalism industry, which, as many of you know, has been having a hard time lately with its financial structures and support. This looked at 23 policies that were considered or passed from 2018 through August of 2024, and there's a pretty wide mix. I will say it also includes a number of state policies in the U.S., because the state policy space in the U.S. is very active these days, so quite a number of those are included. The first thing we did here was build a framework, and this is something else we really recommend when you're in a complex policy space with a wide range in the focus or orientation of the policies: to ask, what's the actual financial structure or subject orientation of how this is going to work? You can see here that the first three are really around usage: usage criteria, the digital interaction that then warrants some sort of compensation. The second set is subsidies that come either directly from tech platforms or, in some cases, from the government itself. And the third is a tax mechanism, which either creates new taxes or builds off of existing ones. Once you have this framework in place for whatever your subject area or policy is, then for any new policy that comes in, you can figure out where it fits into the framework you've created. We then broke the analysis into two parts, again with the core questions I showed you earlier at the top of our minds. The first was definitional: how is the usage and interaction of digital content determined? And then, once that definitional boundary is put in place, whether and to what degree that content should be paid for, and how that decision gets made.
When is it appropriate to charge for digital usage? Is compensation for digital usage applied consistently? Who benefits, who gets that money, and where does that money actually go? The second half of the analysis then looks at the core viability or sustainability elements of journalism. How do we keep an independent news media? How do we support diversity in this space? How do we sustain journalism that is actually serving the way the public gets informed today? I'm going to walk through this really quickly because this is a lightning talk, but there's a lot more detail on the website if you want to go into it, and I'd be happy to talk with anybody about it in greater detail as well. First, on the digital usage side, what this really shows is that the criteria of usage, as articulated across these policies, vary dramatically. The two green circles are at the ends of the spectrum in terms of what usage could be, and inside that it ranges from things like clicking on a URL link, to showing the title of the article, to usage of content for indexing, or creating an article summary and having that summary be what warrants compensation. So the definition of what usage and interaction actually mean varies dramatically across these policies, and so too does the level of compensation, as well as who that compensation would be going to. In some cases it goes to the organizational level; in some cases it goes to individual journalists. Usually it's at the organizational level, though in recent months more has shifted toward going to journalists or journalism producers themselves. So we see great variety there, which brings us back to the importance of articulating from the get-go what the goal is and how to use the best language to express it clearly and consistently. The second half of the study, as I mentioned, looks at the journalistic viability elements: first independence and diversity, then public interest and access. When we talk about independence, as we saw in the fake news study, anytime you create policy you give the government a role, which isn't a bad thing, right? That's what policy is for. But it does mean it's really important to think about what safety mechanisms are in place to be sure that, particularly in the journalism and news space, it doesn't end up giving an individual government or figure, as the years progress, the ability to take control over the information space. One of the things we saw in these policies was that in many cases it was left unclear who had the authority to arbitrate the law, and where there were third parties or agencies, how that got determined and how it would be determined over time. On diversity, the biggest thing we saw was a really inconsistent and rather haphazard approach to the diversity of news media that would be included. A lot of these really ended up oriented around the very large news operations and outlets. Some, as we got further into the policy timeline, called out certain minority or ethnic news outlets. It was only those focused on the tax element that really addressed local journalism itself.
So how do we make sure that policies in this arena are going to support the diversity of voices in the journalism space that has been so valuable to the public? And then finally, public interest. I will say, on the innovation side, only the EU directive and one state policy in the U.S., New Jersey's, spoke at all about being forward-looking and about the innovation side of technology and where that might lead us in the future. On the public interest side, all of the talk is about serving the public, and there were references to the public in there, but what was unclear was how these steps actually do a better job of getting news and information to the public, especially considering the ways the public gets information and the diversity of producers of journalism that the public turns to, including oftentimes many smaller, individual journalism producers that the public has come to trust and rely on. The third policy area, which we are just in the early stages of now, so I'm only going to give you a touch of the framework we're using, is the AI policy space. Here there's so much policy being talked about and enacted; many are not actually law and cannot be legally enforced at this point. But very, very few in this space talk about journalism or the news and information space directly at all. Yet if we know what's happening with AI and the digital landscape more broadly, there very much is an indirect, and in the end direct, relationship in the way AI policy can and likely would affect the digital news landscape. So thinking about those things inside these other policies is really important before they get passed and we are too far down the line. How can it affect journalists' ability to make use of the benefits of technology while also safeguarding against the risks? And how do these policies work across state and national and country lines? So this is the framework in general, and I'm not going to go through it in detail because I want to have time for questions. Again, we start with: what's the range of the kinds of policies out there? You can see here there are many that create a committee or an agency. Okay, do those committees or agencies have somebody from the journalism sector to play a role, to be a part of that? Do they take information integrity into account? For those that focus on deepfakes and synthetic content, we look at the impact on journalists and their reporting. There's a lot of good that can come from having policies in place around deepfakes, but how does that square with some of what journalists need to do, especially in unsafe areas, to get their information out? What about labels? How does labeling these kinds of things have public relevance? Think about watermarks: C2PA was up here last week, and there's a lot of good in that kind of identity inside content, especially content that's AI-driven. But what does it mean to the public? We've done a lot of research, and have actually started a global research working group on this topic specifically, to try to make sense of what the research says about the public response to labeling and to not labeling, and whether there are ways that journalists and others can make use of labels while actually fostering public trust in technology and in their work, as opposed to further diminishing it.
There's the focus on algorithmic bias and discrimination, and one thing that's important in this area of policy, which I'll just mention as an example, is that there, and in the frontier-model space, a lot of liability falls on the user. That can be the public, it can be companies, and it can also be the journalist who is using the technology. It's very important to think about who that user may be, and whether the policy should define that and make very clear who can be liable for content if some sort of negative usage effect comes out of it. Then there is comprehensive regulation. Again, we're just in the beginning stages of this analysis, and I look forward to sharing it when the team is done. I'm going to close with a couple more pieces of data that get to the public side of all this, because ultimately safeguarding an independent press and a diverse news media is about serving the public in the digital information space. It's really important, as we all delve deep into policy deliberation and decision-making, that we don't forget about the public we say we're serving. How does the public think about journalism and the ways they can get informed today? That has been greatly expanded in the digital landscape. You can see here, from the four countries we did this survey in, that the public places great value on journalism and the role it plays in society. That really cut across the board. But we also see in the data that the public has a very broad definition of who can be a producer of journalism today. It may be somebody inside an organization; it may be an individual doing their own work. What came through in the follow-up focus group discussions is that it's mission-driven, it's guided by principles, all of those elements we think of as journalism, but it doesn't necessarily have to be a news operation. So how does all of that work when we're thinking about policy and its implications in this space? And then finally, we can see that people are going to individuals they consider journalists for their content today. And this is the final one: to also remember, as we make these policies, that journalists, journalism producers, and the public all see technology as critically important to their ability to produce the news, to gather it, to disseminate it, and to get informed. So again, coming back to these policy questions: how do we do the best job of enabling the benefits and meeting the needs in this space, while also guarding against the potential harms? A quick wrap-up of the questions we suggest you keep in mind: these are for journalism policies specifically, but they carry through, in a slightly nuanced way, to all digital policy. All in all, what's most important in this critical policy space is that policymakers, technology companies, media companies, journalism producers, researchers, and civil society actually work together to have really thorough conversations, driven by data and a seeking of knowledge, that keep the public interest in mind, so we can keep up with changing technology and determine how best to mitigate the risks while enabling the benefits. Thank you. I'd be happy to take questions. You can sign up for our newsletter here, follow us in different places, and find the website here.
Any questions?


Audience: Yeah, thank you so much. I’ve got two questions, both actually relating to the sample choices in your first study, which I think covered 31 countries. I was a little surprised that, for the European Union, you took in four individual member states rather than the EU as a whole, because I think most of the aspects covered are now covered by the DSA, the Digital Services Act. So those national laws have mostly become obsolete. And the second question, to put it directly: what is the point of having really established autocracies in the sample, where it’s obvious and clear that they will use any excuse to control the information sphere and to use laws against disinformation to exercise control? So what’s the point of having that? That is just, to me, obvious. And what conclusions can we draw from it?


Amy Mitchell: Yeah, those are two great questions. Thank you. On the EU, and I can share more on the methodology offstage, the EU-level legislation in place was broader online safety law that didn’t talk directly about fake news. What we looked at instead were specific country policies that related directly to fake news or illegal content online, as opposed to broader online safety. We have a whole footnote in our methodology on that specific decision, but it’s a good question. And really, it’s also about examples. CNTI, the Center for News, Technology & Innovation, does not advocate for or call for any specific policy. What this work does is ask what the range of what’s out there is and how we can learn from it, as opposed to commenting very specifically on one particular policy or another. But there is a whole methodology that goes into detail on that decision and on the timing of the cutoff for those selections. On the autocracies, good question. One, they’re countries. They have people that live in them who get affected by these laws. We should care. That’s number one. Number two is, as I was mentioning earlier, there are so many policies that carry impact from one country to the next, whether it’s copycatting or otherwise; we saw a policy that was done in a very well-intended, proactive way inside a democratic country get pulled by India and used for information control. So it’s also important to think about the ways that a policy can be taken by another entity and put to ill use in the information landscape. And it’s also important, especially if we’re here at the UN and the IGF talking about collaboration and supportive environments, to be aware of what’s happening in other countries. Do we have time for one more?


Audience: Hello. Thank you so much. That was super interesting. I specifically appreciate that you brought in that we should look more at the public perspective of this, at the people we’re actually trying to serve. And so I was wondering, because these policies, especially on disinformation, get so tricky with the definitions, as you mentioned: would you recommend that countries also put more emphasis on the demand side of disinformation? Why do people believe disinformation? Why do people engage with it? And why is it so easy for them to consume it? I know that, for example, in Sweden, there’s a psychological defense agency within the government that tries to prepare the population a little better for how to engage with disinformation and how to recognize it. And I was just wondering if that would be a different approach to consider in terms of policy.


Amy Mitchell: Thank you for the question. It’s certainly an important element of this, right: what is the public doing with all this content? There is actually a fair amount of research, and a lot of it shows that disinformation campaigns don’t have much impact on what people actually believe or don’t believe, but that people’s own behavior, and where they’re choosing to go, can. One of the biggest things that we see, though, and this was some research I did back in my days at Pew, is that what the public would categorize as disinformation can vary greatly, right? Our data show, at least in that one study we did, that what the public calls disinformation aligns very much with one’s political thinking and with the kinds of sources one turns to. That’s a broader societal question, too, right? So I think your question comes back to the content moderation slide that I showed, which is articulating what the goal of the policy is, and what the goal of the information landscape is. I mean, it’s not going to be perfect. We’ve never had a perfect information landscape. So what is the goal, and then what are the best mechanisms to put in place that do the best job of getting as close to it as we can, without other unneeded risks and harms being brought into place? It’s a really tricky balancing act, and it’s an area that CNTI plans to spend more time examining in the coming year. Thank you all. My time is up. Thank you.



Amy Mitchell

Speech speed

159 words per minute

Speech length

4881 words

Speech time

1831 seconds

Growing number of laws affecting journalism directly and indirectly, making it harder to define what journalism is

Explanation

Mitchell argues that society is currently passing and debating more laws related to journalism than ever before, occurring simultaneously with increased difficulty in defining journalism boundaries. This affects both business/legal policy spaces and public perceptions of what constitutes journalism and reliable information sources.


Evidence

Growing array of policy issues from content moderation to AI protection, disinformation policies, with digital landscape policies in one country impacting others


Major discussion point

Current State of Journalism and Policy Landscape


Topics

Freedom of the press | Content policy | Legal and regulatory


50% of surveyed journalists experienced government censorship in the past year

Explanation

Based on an international survey conducted by CNTI with partnership organizations, half of the journalists surveyed had experienced some form of government censorship. This censorship ranged from being denied access to events to receiving complaints about content to imprisonment.


Evidence

International survey conducted in fall of last year with GFMD (the Global Forum for Media Development), the Online News Association, and other partner organizations


Major discussion point

Current State of Journalism and Policy Landscape


Topics

Freedom of the press | Human rights principles | Cybersecurity


World press freedom scores have declined to 1993 levels globally

Explanation

Mitchell presents data showing that global press freedom has deteriorated significantly, with current levels matching those from 1993. This decline affects both government censorship issues and independent protection of journalists.


Evidence

Data from entities tracking press freedom scores year over year, showing consistent decline in world press freedom scores


Major discussion point

Current State of Journalism and Policy Landscape


Topics

Freedom of the press | Human rights principles


Policy intended for protection is being used to imprison and control journalists

Explanation

Through convenings and discussions with journalism professionals, Mitchell found that policies theoretically designed to protect information or journalists are actually being used to imprison or control journalists and the information space. This represents a significant unintended consequence of well-intentioned policy.


Evidence

Conversations from CNTI convenings, including one in Mexico City with OEM focusing on producing journalism amid security threats, where participants discussed policy being used against journalists


Major discussion point

Current State of Journalism and Policy Landscape


Topics

Freedom of the press | Legal and regulatory | Human rights principles


Need for critical questions when analyzing digital policy: definitional language, independence, diversity, public service, and unintended consequences

Explanation

Mitchell proposes a framework of essential questions that should be explored in any digital policy debate. These questions address how journalism and related terms are defined, who has oversight authority, how diversity is maintained, how the public is served, and what unintended consequences might arise.


Evidence

CNTI research over the past year including policy analysis, issue primers, surveys, four-country public survey with focus groups and statistical surveys, and global convenings


Major discussion point

Policy Analysis Framework and Research Methodology


Topics

Legal and regulatory | Content policy | Human rights principles


Importance of examining who has oversight authority to determine how laws are enacted

Explanation

Mitchell emphasizes that a critical question in policy analysis is identifying who receives the authority to determine the details of how laws are practiced and enacted. This authority assignment significantly impacts the independence of journalism and can determine whether policies protect or harm press freedom.


Evidence

Analysis of fake news policies showing 14 of 32 policies placed control directly in government hands, with most focusing on central government control


Major discussion point

Policy Analysis Framework and Research Methodology


Topics

Legal and regulatory | Freedom of the press | Human rights principles


Cross-border policy impacts require thinking beyond national boundaries

Explanation

Mitchell argues that in the digital landscape, policies passed in one country significantly impact and are impacted by policies in other countries. This interconnectedness makes it essential to consider policy implications across country and regional borders rather than in isolation.


Evidence

Example of well-intended policy from a democratic country being adopted by India for information control purposes


Major discussion point

Policy Analysis Framework and Research Methodology


Topics

Legal and regulatory | Jurisdiction | Digital business models


Study of 32 fake news policies across 31 countries showed greater risk to journalistic independence than protection of information space

Explanation

CNTI’s analysis of fake news policies from 2020-2023 found that these policies created more risk to journalistic independence and diversity, and to public access to fact-based news, than they provided protection for the information space. This finding applied across both democratic and autocratic countries.


Evidence

Analysis of 32 policies across 31 countries between 2020-2023, including 11 democracies, with findings consistent across different government types


Major discussion point

Fake News Policy Analysis


Topics

Freedom of the press | Content policy | Legal and regulatory


Disagreed with

– Audience

Disagreed on

EU policy analysis methodology – individual member states vs. EU-wide legislation


Only 7 of 32 policies defined what constitutes fake or illegal content, leaving definitions to authority figures

Explanation

Mitchell’s research revealed that most fake news policies failed to clearly define what constitutes fake or illegal content, with only seven policies providing definitions. The remaining policies left these crucial definitions vague, effectively placing definitional power in the hands of authority figures who implement the policies.


Evidence

Detailed analysis of definitional language in 32 fake news policies, with only 2 policies defining what constitutes real news or journalism


Major discussion point

Fake News Policy Analysis


Topics

Content policy | Legal and regulatory | Freedom of the press


14 policies placed control directly in government hands, with imprisonment penalties ranging from less than one month to over three years

Explanation

The study found that nearly half of the analyzed policies gave direct control to government entities, typically central governments, to arbitrate and enforce the laws. Most policies included imprisonment as punishment, with sentences varying dramatically from less than one month to over three years, with Zimbabwe having the longest sentences.


Evidence

Specific analysis showing 14 of 32 policies with government control, imprisonment penalties ranging from less than one month to over three years in Zimbabwe


Major discussion point

Fake News Policy Analysis


Topics

Legal and regulatory | Freedom of the press | Human rights principles


Analysis of 23 revenue-focused policies from 2018-2024 showed dramatic variation in defining digital content usage and compensation criteria

Explanation

CNTI’s study of media remuneration policies revealed significant inconsistency in how digital content usage is defined and what warrants compensation. The criteria ranged from simple URL clicks to article summaries, with equally varied compensation levels and recipient structures.


Evidence

Analysis of 23 policies from 2018 through August 2024, including US state policies, showing usage criteria ranging from clicking URL links to creating article summaries


Major discussion point

Media Remuneration Policy Analysis


Topics

Digital business models | Intellectual property rights | Legal and regulatory


Policies inconsistently addressed diversity of news media, often favoring large operations over smaller outlets

Explanation

Mitchell found that media remuneration policies took an inconsistent and haphazard approach to supporting diverse news media. Many policies ended up favoring large news operations and outlets, with only some later policies specifically addressing ethnic minority media or local journalism through tax mechanisms.


Evidence

Analysis showing policies initially favored large operations, with some later policies calling out ethnic minority media, and only tax-focused policies supporting local journalism


Major discussion point

Media Remuneration Policy Analysis


Topics

Cultural diversity | Digital business models | Legal and regulatory


Most policies failed to clearly articulate how they would better serve public information needs

Explanation

While all media remuneration policies claimed to serve the public interest, Mitchell found that they failed to clearly explain how their mechanisms would actually improve public access to news and information. The policies didn’t adequately consider how the public actually consumes information or the diversity of journalism producers the public relies on.


Evidence

Analysis showing policies referenced serving the public but lacked clear articulation of how steps would better deliver news to public, especially considering diverse journalism producers


Major discussion point

Media Remuneration Policy Analysis


Topics

Digital access | Content policy | Human rights principles


Agreed with

– Audience

Agreed on

Need for alternative approaches to disinformation beyond content restriction


Few AI policies directly address journalism, but they have indirect impacts on the digital news landscape

Explanation

Mitchell argues that while AI policies rarely mention journalism or news information directly, they have significant indirect and eventual direct relationships with the digital news landscape. This makes it important to consider journalism implications before AI policies are passed and implemented.


Evidence

Early-stage analysis of AI policy space showing very few policies directly addressing journalism or news information, but with clear indirect impacts on digital news landscape


Major discussion point

AI Policy Framework and Implications


Topics

Legal and regulatory | Future of work | Digital standards


Need to consider how AI policies affect journalists’ ability to benefit from technology while guarding against risks

Explanation

Mitchell emphasizes the importance of examining how AI policies can impact journalists’ ability to utilize technological benefits while also providing protection against potential risks. This requires careful consideration of both opportunities and threats that AI policies present to journalism.


Evidence

Framework analysis examining range of AI policies including committees, agencies, deep fakes, synthetic content, labeling, and algorithmic bias considerations


Major discussion point

AI Policy Framework and Implications


Topics

Future of work | Digital standards | Legal and regulatory


Liability often falls on users, including journalists, requiring clear policy definitions

Explanation

In AI policy analysis, Mitchell found that liability frequently falls on users, which can include the public, companies, and journalists using AI technology. This makes it crucial for policies to clearly define who constitutes a user and under what circumstances they can be held liable for content or negative usage effects.


Evidence

Analysis of algorithmic bias, discrimination policies, and frontier models showing liability placement on users, with need for clear user space definitions


Major discussion point

AI Policy Framework and Implications


Topics

Legal and regulatory | Liability of intermediaries | Future of work


Public places great value on journalism’s role in society but has broad definitions of who can be journalism producers

Explanation

Mitchell’s four-country survey revealed that the public highly values journalism’s societal role across all surveyed countries. However, the public also maintains a very broad definition of who can produce journalism, including individuals working independently, as long as the work is mission-driven and guided by journalistic principles.


Evidence

Four-country survey with focus groups and representative statistical surveys showing public value for journalism and broad definitions of journalism producers as mission-driven and principle-guided


Major discussion point

Public Perspective and Engagement


Topics

Content policy | Cultural diversity | Digital identities


Both journalists and public see technology as critically important for news production and consumption

Explanation

Mitchell’s research demonstrates that both journalism producers and the public view technology as essential for gathering, producing, disseminating, and consuming news. This mutual dependence on technology underscores the importance of policies that enable technological benefits while protecting against potential harms.


Evidence

Survey data showing both journalists and public consider technology critically important for news production, gathering, dissemination, and consumption


Major discussion point

Public Perspective and Engagement


Topics

Digital access | Future of work | Digital standards


Agreed with

– Audience

Agreed on

Importance of addressing public perspective in disinformation policy


Response that autocratic policies matter because they affect real people and can be copied by other countries for harmful purposes

Explanation

When questioned about including autocracies in policy analysis, Mitchell argued that these policies matter because they affect real people living under those governments. Additionally, policies from autocratic countries can be copied or adapted by other nations, and even well-intentioned democratic policies can be misused by autocratic regimes for information control.


Evidence

Example of well-intended policy from democratic country being adopted by India for information control, demonstrating cross-border policy copying for harmful purposes


Major discussion point

Methodological and Scope Questions


Topics

Human rights principles | Freedom of the press | Legal and regulatory


Disagreed with

– Audience

Disagreed on

Methodological approach to including autocracies in policy analysis



Audience

Speech speed

142 words per minute

Speech length

319 words

Speech time

134 seconds

Question about whether countries should focus more on demand side of disinformation – why people believe and engage with it

Explanation

An audience member suggested that countries should emphasize understanding why people believe and engage with disinformation rather than just focusing on supply-side controls. They questioned whether addressing the psychological and behavioral aspects of disinformation consumption might be more effective than content-focused policies.


Evidence

Reference to Sweden’s psychological defense agency that prepares the population to better engage with and recognize disinformation


Major discussion point

Public Perspective and Engagement


Topics

Content policy | Online education | Human rights principles


Agreed with

– Amy Mitchell

Agreed on

Importance of addressing public perspective in disinformation policy


Suggestion that psychological defense approaches, like Sweden’s agency, could be alternative policy approaches

Explanation

The audience member proposed that psychological defense mechanisms, such as Sweden’s government agency that helps prepare the population to recognize and engage with disinformation, could represent an alternative policy approach. This would focus on building public resilience rather than content restriction.


Evidence

Sweden’s psychological defense agency as an example of government efforts to prepare population for disinformation recognition and engagement


Major discussion point

Public Perspective and Engagement


Topics

Online education | Content policy | Capacity development


Agreed with

– Amy Mitchell

Agreed on

Need for alternative approaches to disinformation beyond content restriction


Question about including individual EU member states rather than EU-wide Digital Services Act in the study sample

Explanation

An audience member questioned the methodology of including four individual EU member states in the fake news policy analysis rather than examining the EU-wide Digital Services Act. They suggested that national laws may have become obsolete due to the overarching EU legislation.


Evidence

Reference to EU Digital Services Act covering most aspects that were previously handled by individual member state laws


Major discussion point

Methodological and Scope Questions


Topics

Legal and regulatory | Jurisdiction | Content policy


Disagreed with

– Amy Mitchell

Disagreed on

EU policy analysis methodology – individual member states vs. EU-wide legislation


Challenge regarding the value of including autocracies in policy analysis when their control intentions are obvious

Explanation

An audience member questioned the analytical value of including established autocracies in the policy study sample, arguing that it’s obvious these governments will use any excuse to control information and exercise control over the information sphere. They questioned what conclusions could be drawn from such predictable behavior.


Major discussion point

Methodological and Scope Questions


Topics

Freedom of the press | Human rights principles | Legal and regulatory


Disagreed with

– Amy Mitchell

Disagreed on

Methodological approach to including autocracies in policy analysis


Agreements

Agreement points

Importance of addressing public perspective in disinformation policy

Speakers

– Amy Mitchell
– Audience

Arguments

Both journalists and public see technology as critically important for news production and consumption


Question about whether countries should focus more on demand side of disinformation – why people believe and engage with it


Summary

Both speakers recognized the critical importance of understanding and addressing the public’s role in information consumption, with Mitchell emphasizing technology’s importance to both producers and consumers, and the audience member suggesting focus on why people engage with disinformation


Topics

Content policy | Online education | Digital access


Need for alternative approaches to disinformation beyond content restriction

Speakers

– Amy Mitchell
– Audience

Arguments

Most policies failed to clearly articulate how they would better serve public information needs


Suggestion that psychological defense approaches, like Sweden’s agency, could be alternative policy approaches


Summary

Both speakers implicitly agreed that current content-focused approaches are insufficient, with Mitchell noting policies fail to serve public needs and the audience member proposing psychological defense mechanisms as alternatives


Topics

Content policy | Online education | Capacity development


Similar viewpoints

Both recognize that the public’s perspective and behavior are central to understanding and addressing information challenges, whether in defining journalism or in consuming/believing information

Speakers

– Amy Mitchell
– Audience

Arguments

Public places great value on journalism’s role in society but has broad definitions of who can be journalism producers


Question about whether countries should focus more on demand side of disinformation – why people believe and engage with it


Topics

Content policy | Cultural diversity | Online education


Unexpected consensus

Value of studying autocratic policies despite predictable outcomes

Speakers

– Amy Mitchell
– Audience

Arguments

Response that autocratic policies matter because they affect real people and can be copied by other countries for harmful purposes


Challenge regarding the value of including autocracies in policy analysis when their control intentions are obvious


Explanation

While the audience member initially challenged the value of studying autocratic policies, Mitchell’s response about cross-border policy copying and real human impact created an unexpected area of understanding about the interconnected nature of global policy effects


Topics

Human rights principles | Freedom of the press | Legal and regulatory


Overall assessment

Summary

The discussion showed limited but meaningful consensus around the importance of public-centered approaches to information policy and the recognition that current content-focused policies may be insufficient


Consensus level

Moderate consensus on methodological approaches and public engagement importance, with constructive dialogue rather than disagreement on policy analysis scope. The consensus suggests a shared understanding that effective information policy requires deeper consideration of public behavior and cross-border implications.


Differences

Different viewpoints

Methodological approach to including autocracies in policy analysis

Speakers

– Amy Mitchell
– Audience

Arguments

Response that autocratic policies matter because they affect real people and can be copied by other countries for harmful purposes


Challenge regarding the value of including autocracies in policy analysis when their control intentions are obvious


Summary

The audience member questioned the analytical value of including established autocracies in policy studies since their intention to control information is predictable, while Mitchell argued that these policies matter because they affect real people and can be adopted by other countries for harmful purposes.


Topics

Freedom of the press | Human rights principles | Legal and regulatory


EU policy analysis methodology – individual member states vs. EU-wide legislation

Speakers

– Amy Mitchell
– Audience

Arguments

Study of 32 fake news policies across 31 countries showed greater risk to journalistic independence than protection of information space


Question about including individual EU member states rather than EU-wide Digital Services Act in the study sample


Summary

The audience member questioned why the study included four individual EU member states rather than examining the EU-wide Digital Services Act, suggesting national laws may be obsolete, while Mitchell defended the methodology based on focusing on specific fake news policies rather than broader online safety legislation.


Topics

Legal and regulatory | Jurisdiction | Content policy


Unexpected differences

Research methodology and scope decisions

Speakers

– Amy Mitchell
– Audience

Arguments

Cross-border policy impacts require thinking beyond national boundaries


Question about including individual EU member states rather than EU-wide Digital Services Act in the study sample


Explanation

The disagreement about research methodology was unexpected because it revealed different perspectives on how to analyze transnational policy frameworks. While Mitchell emphasized cross-border impacts and the value of examining diverse policy approaches, the audience member focused on regulatory efficiency and questioned the relevance of studying potentially obsolete national policies.


Topics

Legal and regulatory | Jurisdiction | Content policy


Overall assessment

Summary

The disagreements were primarily methodological rather than substantive, focusing on research approach and scope rather than fundamental policy principles


Disagreement level

Low to moderate disagreement level. The disagreements were constructive and focused on research methodology and analytical approaches rather than core policy goals. Both speakers appeared to share concerns about protecting press freedom and serving public interests, but differed on analytical frameworks and research scope. These methodological disagreements actually enhanced the discussion by raising important questions about how to effectively study and compare international policies.




Takeaways

Key takeaways

Current journalism policy landscape is unprecedented in scope and complexity, with 50% of journalists experiencing government censorship and global press freedom at 1993 levels


Well-intentioned policies often create unintended consequences that harm journalistic independence and diversity rather than protecting the information space


Critical policy analysis framework should examine definitional language, oversight authority, diversity impacts, public service goals, and cross-border effects


Fake news policies across 31 countries showed most (25 of 32) failed to define key terms, leaving interpretation to government authorities, with 14 policies placing control directly in government hands


Media remuneration policies vary dramatically in defining digital content usage and compensation, often favoring large outlets over diverse smaller operations


AI policies rarely address journalism directly but have significant indirect impacts on the digital news landscape through liability placement and content regulation


Public has broad definition of journalism producers beyond traditional news organizations, valuing mission-driven, principle-guided content creators


Technology is viewed as critically important by both journalists and public for news production, gathering, dissemination, and consumption


Policy impacts cross national borders through copycatting and international digital infrastructure, requiring global coordination


Need for collaborative approach involving policymakers, technology companies, media organizations, researchers, and civil society in data-driven policy discussions


Resolutions and action items

CNTI plans to spend more time in the coming year examining the relationship between public behavior and disinformation policy effectiveness


CNTI is developing a research working group focused on public response to AI content labeling and watermarking


CNTI will complete and share analysis of AI policy impacts on journalism when the team finishes their comprehensive study


Recommendation for policymakers to articulate clear goals for what the digital information landscape should look like before implementing content moderation policies


Unresolved issues

How to balance defining journalism and fake news in policy without creating tools for government control of information


What the optimal end goal should be for content moderation policies and digital information landscapes


How to ensure AI policies adequately protect journalists while enabling technological benefits


Whether demand-side approaches to disinformation (focusing on why people believe false information) should be prioritized over supply-side regulation


How to create consistent cross-border policy frameworks that respect national sovereignty while addressing global digital challenges


How to ensure media remuneration policies effectively serve public information needs rather than just supporting large media organizations


What constitutes appropriate liability distribution between AI platforms, users, and content creators including journalists


Suggested compromises

Balancing the need for policy definitions with safeguards against government overreach by carefully considering who has oversight authority


Including diverse stakeholders (journalism sector representatives) in AI policy committees and agencies rather than excluding media perspectives


Focusing on psychological defense and media literacy approaches alongside regulatory measures to address disinformation


Creating policy frameworks that enable technological benefits while implementing specific safeguards against identified risks


Developing policies that support both large and small journalism operations rather than favoring one over the other


Thought provoking comments

It’s not clear within the policy conversation space that we’ve really actually done a very good job at all of articulating what the digital landscape would look like if this content moderation policy that we’re talking about gets put in place. What is it that’s bad, that’s out? What is it that’s good, that’s in? What’s the mix? There’s always going to be a mix of content in there. So really taking the time to think about the ultimate goals there

Speaker

Amy Mitchell


Reason

This comment is deeply insightful because it exposes a fundamental flaw in policy-making: the lack of clear vision for desired outcomes. Rather than focusing on technical mechanisms, Mitchell highlights that policymakers haven’t adequately defined what success looks like in the information landscape.


Impact

This observation reframes the entire discussion from ‘how to regulate’ to ‘what are we trying to achieve.’ It introduces a meta-level critique that challenges the foundation of current policy approaches and sets up the framework for more thoughtful policy design throughout her presentation.


What’s the point of having really established autocracies among the sample, where it’s obvious and clear that they will use any excuse to control the information sphere and to use laws against disinformation to exercise control? So what’s the point of having that? That is just, to me, obvious.

Speaker

Audience member


Reason

This question is provocative because it challenges the methodology and underlying assumptions of comparative policy analysis. It forces consideration of whether studying authoritarian approaches has value when their intent to control information is predetermined.


Impact

This question shifts the discussion toward the interconnectedness of global policy and the practical implications of policy migration across different governmental systems. It leads Mitchell to articulate how well-intentioned democratic policies can be co-opted by authoritarian regimes, adding a crucial geopolitical dimension to the conversation.


I was wondering if you would recommend that countries should also put more emphasis on the demand side of disinformation. Why do people believe disinformation? Why do people engage with it? And why is it so easy for them to consume it?

Speaker

Audience member


Reason

This comment is thought-provoking because it fundamentally shifts the focus from supply-side regulation (controlling content) to demand-side intervention (addressing why people consume misinformation). It suggests a completely different policy approach focused on media literacy and psychological factors.


Impact

This question introduces a new dimension to the policy discussion, moving beyond content regulation to consider human behavior and education. It prompts Mitchell to acknowledge the complexity of public perception and the subjective nature of what constitutes ‘disinformation,’ adding nuance to the entire framework.


We also see in the data that the public has a very broad definition of who can be producers of journalism today. It may be somebody inside an organization. It may be an individual who’s working on their own or doing their own work… it’s mission-driven. It’s guided by principles. It’s all of those elements that we think about of journalism, but it doesn’t necessarily have to be a news operation.

Speaker

Amy Mitchell


Reason

This observation is crucial because it highlights the disconnect between traditional policy frameworks (which assume institutional journalism) and contemporary reality (where individual creators are considered journalists by the public). It challenges fundamental assumptions about who deserves protection under journalism policies.


Impact

This insight forces a reconsideration of how journalism protection policies should be structured. It suggests that current policy frameworks may be inadequate for protecting the diverse ecosystem of information producers that the public actually relies on, fundamentally challenging traditional approaches to media regulation.


There is actually a fair amount of research, and there’s a lot that actually shows the disinformation campaigns don’t have a whole lot of impact on what people actually believe or don’t believe, but that people’s own behavior and where they’re choosing to go can have that… what the public would categorize as disinformation can vary greatly… there are very much alignments to one’s political thinking, to the kinds of sources you turn to.

Speaker

Amy Mitchell


Reason

This comment is particularly insightful because it challenges the entire premise underlying much disinformation policy – that external disinformation campaigns are the primary problem. Instead, it suggests that individual choice and political bias are more significant factors, which would require entirely different policy approaches.


Impact

This observation fundamentally questions the effectiveness of content-focused disinformation policies and suggests that the problem may be more about political polarization and media consumption habits than external manipulation. It adds significant complexity to the policy discussion by suggesting that the problem may not be solvable through traditional regulatory approaches.


Overall assessment

These key comments collectively transformed what could have been a technical policy discussion into a fundamental examination of assumptions underlying digital governance. Mitchell’s insights about the lack of clear policy goals and the evolving nature of journalism challenged traditional regulatory frameworks, while audience questions pushed the conversation toward more nuanced considerations of global policy interconnectedness and human behavioral factors. The discussion evolved from presenting research findings to questioning the foundational premises of current policy approaches, ultimately suggesting that effective digital governance requires a more sophisticated understanding of public behavior, global policy dynamics, and the changing nature of information production and consumption. The comments created a progression from ‘what policies exist’ to ‘what should policies actually try to achieve’ to ‘are current approaches fundamentally flawed,’ resulting in a much more critical and comprehensive examination of digital policy challenges.


Follow-up questions

What would the digital landscape look like if content moderation policies get implemented – what content should be out vs. in, and what should the mix be?

Speaker

Amy Mitchell


Explanation

This is a fundamental question about policy goals that Mitchell identified as not being well-articulated in current policy discussions, which is critical for effective content moderation policy design


How do AI policies affect journalists’ ability to use technology benefits while safeguarding against risks, especially across different jurisdictions?

Speaker

Amy Mitchell


Explanation

This represents an ongoing research area that CNTI is just beginning to explore, focusing on the indirect impacts of AI policy on journalism


How can labeling and watermarking systems foster public trust with technology and journalism rather than diminish it?

Speaker

Amy Mitchell


Explanation

Mitchell mentioned they’ve started a global research working group on this topic to understand public response to labeling systems, which is crucial for effective implementation


How should liability be defined for AI users, including journalists, in frontier model policies?

Speaker

Amy Mitchell


Explanation

This is an important policy consideration as liability often falls on users, and it needs clarification for different user categories including journalists


What is the point of including autocracies in policy analysis samples when their control of information is obvious?

Speaker

Audience member


Explanation

This question challenges the methodology and value of including authoritarian regimes in comparative policy studies


Should countries focus more on the demand side of disinformation – why people believe and engage with it – rather than just content control?

Speaker

Audience member


Explanation

This suggests an alternative policy approach focusing on public education and psychological preparedness rather than content restriction


How can policies better serve the public’s actual information-seeking behaviors and their broad definition of journalism producers?

Speaker

Amy Mitchell


Explanation

Mitchell emphasized the need to understand how the public actually gets information today, including from individual journalists and diverse sources, to inform policy design


What are the best mechanisms to balance reaching information landscape goals while avoiding unintended risks and harms?

Speaker

Amy Mitchell


Explanation

This represents the core challenge of policy design that Mitchell identified as requiring more examination in the coming year


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.