Day 0 Event #258 Nowhere to Hide, Accountability to Fight Global Ransomware

Session at a glance

Summary

This panel discussion, titled “Nowhere to Hide, Accountability to Fight Global Ransomware,” brought together international experts to address the escalating global ransomware threat. Moderated by Giacomo Persi Paoli from UNIDIR, the panel featured representatives from Australia’s cyber affairs office, El Salvador’s UN mission, Microsoft, and the Cyber Peace Institute. The discussion opened with alarming statistics showing ransomware attacks have increased by nearly 300% in the past year, with Microsoft tracking over 600 million cyber attacks daily.


Ambassador Brendan Dowling emphasized that ransomware has evolved from a cybersecurity issue into a national security threat, citing examples of attacks on small Pacific Island nations like Tonga’s health system and Australia’s Medibank incident affecting 10 million citizens. The panelists identified several key factors driving ransomware growth: the emergence of “ransomware as a service” models that lower barriers to entry, the use of cryptocurrency enabling anonymous payments, and the existence of safe havens where cybercriminals operate with impunity, particularly in Russia.


Julie Rodríguez Acosta highlighted how the attack on Costa Rica’s government infrastructure served as a wake-up call for Latin American nations, demonstrating ransomware’s potential to disrupt essential public services and undermine governance. The Cyber Peace Institute presented preliminary research findings showing that of 300 analyzed threat actors, 54% of those that could be attributed were linked to Russia, with over 2,700 incidents recorded across 90 countries, primarily targeting the healthcare sector and U.S. organizations.


The discussion emphasized that effective countermeasures require coordinated international cooperation, moving beyond viewing ransomware as merely a technical problem to recognizing it as a societal threat requiring whole-of-nation responses. Panelists stressed the importance of meaningful public-private partnerships, capacity building across different regions, and the need for states to implement stronger accountability mechanisms while supporting vulnerable organizations that lack cybersecurity resources.


Key points

## Major Discussion Points:


– **Ransomware as a National Security Threat**: The panel emphasized that ransomware has evolved beyond a cybersecurity issue to become a national security crisis affecting critical infrastructure, healthcare systems, and essential government services. Examples included attacks on Tonga’s National Health Information Service and Costa Rica’s government infrastructure, demonstrating how these attacks impact entire societies rather than just individual organizations.


– **Evolution of the Ransomware Ecosystem**: Speakers discussed how ransomware has become industrialized through “ransomware-as-a-service” models, lowering barriers to entry for cybercriminals. The threat landscape has been further complicated by cryptocurrency enabling anonymous payments, AI enhancing attack sophistication, and the emergence of specialized roles like initial access brokers.


– **Safe Havens and Attribution Challenges**: A significant focus was placed on how ransomware groups operate with impunity from certain jurisdictions, particularly Russia, where there are limited legal consequences. The panel discussed various accountability mechanisms including sanctions, law enforcement cooperation, and active disruption measures, while acknowledging their limitations.


– **Public-Private Collaboration Models**: The discussion explored successful partnerships between government and private sector entities, including Microsoft’s pilot program with Europol and Australia’s approach of embedding government cyber experts in private companies during incidents. The importance of information sharing and moving away from treating ransomware as a private sector problem was emphasized.


– **Data-Driven Analysis and Global Mapping**: The Cyber Peace Institute presented preliminary findings from their global ransomware mapping project, showing that 54% of attributed threat actors are linked to Russia, with healthcare being the most targeted sector. This research highlighted the need for evidence-based approaches to understanding and combating ransomware.


## Overall Purpose:


The discussion aimed to bring together diverse stakeholders (government officials, NGOs, private sector, and international organizations) to examine the evolving ransomware threat landscape and explore collaborative approaches to accountability, prevention, and response. The panel sought to move beyond viewing ransomware as merely a technical issue and instead frame it as a global security challenge requiring coordinated international action.


## Overall Tone:


The discussion maintained a serious and urgent tone throughout, reflecting the gravity of the ransomware threat. Speakers consistently emphasized the escalating nature of the problem and the inadequacy of current responses. While acknowledging some positive developments (like improved detection rates and international cooperation initiatives), the overall sentiment was one of concern about the growing sophistication and impact of ransomware attacks. The tone was collaborative and solution-oriented, with speakers building on each other’s points and emphasizing the need for multi-stakeholder cooperation, though there was an underlying frustration with the persistence of safe havens and the challenges of attribution and accountability.


Speakers

– **Giacomo Persi Paoli** – Head of the Security and Technology Program at the United Nations Institute for Disarmament Research (UNIDIR), Panel Moderator


– **Brendan Dowling** – Ambassador for Cyber Affairs and Critical Technology of Australia


– **Julie Rodríguez Acosta** – Minister Counselor for the Permanent Mission of El Salvador to the United Nations


– **Francesca Bosco** – Chief Strategy Officer at the Cyber Peace Institute


– **Chelsea Smethurst** – Director for Cyber Policy and Diplomacy at Microsoft


– **Nedalcho Mihay** – Cyber Threat Analyst with the Cyber Peace Institute


– **Vilda** – Criminologist (audience member who asked a question, identified herself as having written a master’s thesis on ransomware)


**Additional speakers:**


None identified beyond the speakers listed above.


Full session report

# Comprehensive Report: “Nowhere to Hide, Accountability to Fight Global Ransomware” Panel Discussion


## Executive Summary


This panel discussion, moderated by Giacomo Persi Paoli from the United Nations Institute for Disarmament Research (UNIDIR), brought together international experts to address the escalating global ransomware crisis. The discussion featured Ambassador Brendan Dowling (Australia’s Ambassador for Cyber Affairs and Critical Technology), Julie Rodríguez Acosta (Minister Counselor for El Salvador’s UN Mission, participating remotely from New York), Francesca Bosco (Chief Strategy Officer at the Cyber Peace Institute), Chelsea Smethurst (Director for Cyber Policy and Diplomacy at Microsoft), and Nedalcho Mihay (Cyber Threat Analyst with the Cyber Peace Institute).


The panel opened with alarming statistics demonstrating significant increases in ransomware attacks, with Microsoft tracking over 600 million cyber attacks daily and 415,000 attacks per minute. The discussion fundamentally reframed ransomware from a technical cybersecurity issue into a comprehensive national security threat requiring whole-of-society responses, with vivid examples ranging from attacks on small Pacific Island nations to major incidents affecting millions of citizens.


## Opening Context and Threat Landscape


### Why Ransomware Persists Despite Awareness


Giacomo Persi Paoli opened the discussion by addressing a fundamental question: why ransomware continues to proliferate despite being a well-known threat. He identified three key factors: technology factors that make ransomware accessible, the availability of commercial off-the-shelf tools that lower barriers to entry, and systemic failures in countermeasures that allow the threat to persist.


### Unprecedented Growth Statistics


Chelsea Smethurst provided sobering statistics from Microsoft’s threat intelligence, revealing that the company tracks “over 600 million cyber attacks daily” and “415,000 attacks a minute.” She reported a 275% increase in ransomware usage over 12 months, establishing the dramatic scale of the current threat landscape.


Francesca Bosco supplemented these figures with financial data, noting that according to Chainalysis, “victims paid more than 1 billion US dollars in 2023” to ransomware operators, demonstrating the massive economic impact of these attacks.


## Ransomware as a National Security Threat


### Real-World Humanitarian Impact


Ambassador Brendan Dowling provided compelling examples that demonstrated how ransomware transcends traditional cybersecurity boundaries to become a humanitarian crisis. His description of the situation in Tonga was particularly striking: “Last week, the National Health Information Service in Tonga was shut down by a ransomware attack. We have deployed a team from Australia to assist them with recovery… At the moment, in hospitals in Tonga, people are using paper and pen to deliver healthcare to their people.”


Even more profound was Dowling’s account of the cascading social consequences from Australia’s Medibank incident, which affected 10 million citizens. He revealed that “we saw women and families facing domestic violence from partners who weren’t aware of the health treatment that their spouse or their mother or their sister had been seeking, and had to be moved to safe houses to escape violent partners or former partners.” This example powerfully illustrated how data breaches can trigger real-world violence and endanger lives.


### Impact on State Capacity and Governance


Julie Rodríguez Acosta provided insights into how ransomware affects state capacity, drawing on the experience of Costa Rica’s government infrastructure attack. She explained that such attacks can “disrupt essential public services and compromise the confidentiality of citizens’ personal data,” ultimately undermining “public trust in the state’s ability to secure digital systems.”


## The Ransomware-as-a-Service Ecosystem


### Industrialization of Cybercrime


Dowling provided detailed insights into the sophisticated business model behind modern ransomware operations, describing “a service industry where you can talk to a liaison person or a broker who will connect you with the person who will conduct the initial attack on a system.” He noted that “most ransomware groups will just take 20% of the profit” from attacks they facilitate.


Bosco highlighted the role of specialized actors within this ecosystem, including initial access brokers who sell access to compromised networks, facilitating ransomware deployment by other actors. This division of labor has significantly lowered barriers to entry for conducting sophisticated attacks.


## Data-Driven Analysis of Global Ransomware Patterns


### Cyber Peace Institute Research Findings


Despite technical difficulties with screen sharing, Nedalcho Mihay presented preliminary findings from the Cyber Peace Institute’s comprehensive global ransomware mapping project. Their analysis of 2,717 ransomware incidents across 90 countries revealed that over half targeted US organizations, with healthcare being the most affected sector.


The attribution data showed that 52% of the 300 analyzed threat actors remain unattributed; among those that could be attributed, 54% were linked to Russia. This finding reinforced concerns about safe haven jurisdictions and the concentration of ransomware operations in specific geographic regions.


## Enabling Factors and Criminal Infrastructure


### Cryptocurrency as a Fundamental Enabler


Dowling made a striking assertion about cryptocurrency’s role: “this crime type didn’t exist before cryptocurrency. Cryptocurrency enabled the long-range launching of ransomware attacks across the globe.” This insight identified cryptocurrency as a fundamental enabler that transformed ransomware from a localized nuisance into a global threat.


### Safe Haven Jurisdictions


Throughout the discussion, speakers consistently identified safe haven jurisdictions as a critical enabling factor. Smethurst emphasized that “safe havens where ransomware groups operate with impunity, primarily in Russia, enable continued criminal activity.” The attribution data showing the concentration of threat actors linked to Russia reinforced this concern.


### Targeting Vulnerable Infrastructure


Chelsea Smethurst provided a crucial statistic: “over 90% of successful ransomware attacks target unmanaged devices,” highlighting how attackers focus on organizations with limited defensive capabilities. This targeting pattern creates a cycle where those least able to defend themselves become the most attractive targets.


## Response Mechanisms and International Cooperation


### Government Responses and Active Disruption


Dowling outlined Australia’s multi-faceted approach, which includes “financial sanctions, travel restrictions, and active disruption of ransomware infrastructure.” Australia has also implemented practical support measures, such as deploying assistance teams to help Tonga recover from ransomware attacks, and is introducing a ransomware payment reporting scheme.


### International Frameworks


Rodríguez Acosta emphasized the role of international frameworks, noting that “the UN framework for responsible state behavior includes norms about preventing malicious actors from operating with impunity.” She highlighted El Salvador’s advocacy for including ransomware discussions in UN mechanisms and leveraging international cooperation through UN, OAS, and bilateral partnerships.


## Public-Private Collaboration and Emerging Technologies


### Innovative Partnership Models


Smethurst described a pilot program with Europol announced “earlier this month,” representing novel approaches to combining private sector technical capabilities with government investigatory powers. Dowling emphasized the importance of “creating safe spaces for information sharing without regulatory consequences.”


### Artificial Intelligence and Future Threats


The discussion touched on AI’s dual role in ransomware. Rodríguez Acosta noted that AI enhances “sophistication of social engineering and phishing campaigns,” while Smethurst expressed interest in “how creative uses of artificial intelligence tools will evolve to counter ransomware in the coming years.”


### Blockchain Technology Questions


An audience question about blockchain technology revealed that speakers acknowledged their knowledge was outdated in this area. Bosco expressed specific interest in exploring “how to use blockchain for ransomware resistance and incident attribution” and “how you can integrate blockchain with AI for automated threat detection.”


## Capacity Building and Multi-Stakeholder Approaches


### Addressing Global Disparities


Bosco emphasized that “inclusive capacity building across different sectors and geographies is essential for meaningful collaboration,” noting significant disparities in cybersecurity capabilities. She advocated for expanding collaboration beyond government-private sector partnerships to include civil society organizations for “victim-centered responses and ethical frameworks.”


### Supporting Vulnerable Nations


The discussion highlighted how smaller nations address capability gaps through international cooperation. Rodríguez Acosta explained how “small nations like El Salvador leverage international cooperation through UN, OAS, and bilateral partnerships to combat ransomware.”


## Key Challenges and Future Directions


### Persistent Implementation Challenges


Despite broad agreement on the nature of the threat, several challenges remain unresolved. The non-cooperation of safe haven jurisdictions, particularly Russia, represents a significant ongoing obstacle. The development of scalable models for public-private collaboration that can be replicated globally also requires further work.


### Research and Development Needs


The discussion identified several areas requiring further attention, including the development of victim-centered response protocols, ethical frameworks for ransomware incidents, and research into emerging technologies like blockchain applications for cybersecurity.


## Conclusion


This comprehensive panel discussion successfully demonstrated the evolution of ransomware from a technical cybersecurity issue to a multifaceted crisis requiring coordinated international response. The speakers showed remarkable consensus on the nature and scale of the threat, while identifying practical approaches for enhanced collaboration and response.


The discussion’s strength lay in its integration of diverse perspectives from government, private sector, civil society, and international organizations. The vivid examples of real-world impact effectively demonstrated why ransomware requires urgent, comprehensive action that goes beyond traditional cybersecurity approaches.


Moving forward, the challenge lies in translating shared understanding into effective implementation, scaling successful collaboration models, and addressing the fundamental enablers that allow ransomware operations to continue with relative impunity. The innovative approaches discussed provide promising templates, but sustained international cooperation will be essential to address this evolving global threat.


Session transcript

Giacomo Persi Paoli: Good afternoon, ladies and gentlemen. It is my pleasure to welcome you to this panel titled Nowhere to Hide, Accountability to Fight Global Ransomware. My name is Giacomo Persi Paoli, I’m the head of the security and technology program at the United Nations Institute for Disarmament Research, UNIDIR, and I have the pleasure of being your moderator today. So welcome, whether you’re joining us here in person in Oslo or online, we really look forward to engaging with you throughout this event. If you’re following us here in the room, please be mindful that you have to wear your headset and we are actually broadcasting on channel five for this meeting. Over the course of the panel, there will be the opportunity for you to engage with our expert speakers and ask questions. If you are here in the room, you will see microphones at the periphery of the seating area and if you are online, please do submit your questions in the chat. We have a dedicated moderator that will be passing them on to me and then I will extend them to our expert speakers. So why ransomware? Well, ransomware has emerged as an urgent global challenge with attacks growing by nearly 300% last year alone. Now, ransomware in itself is not new. So the question comes, how is it possible that despite the fact that we all know what ransomware is, it’s still having such a devastating impact on cyber security? How come these percentages keep growing? And that’s probably a combination of different factors. On one side, there is definitely the technology factor, the technology factor that is making these ransomware campaigns more complex, more sophisticated, more difficult to detect, quicker to deploy at scale. 
There is also another evolution of the threat landscape, which is the emergence of commercial off-the-shelf ransomware tools or cybercrime as a service that has really broadened the base and lowered the barriers for cybercriminals that are willing to engage in this malicious behavior. So, on one side, we have definitely the threat that is continuously evolving and becoming increasingly complex. And on the other side, we probably have a failure, a systemic failure to find the right countermeasures to mitigate this threat. And these countermeasures start from basic cyber hygiene of individuals and they escalate up to organizational and governmental and intergovernmental responses. So, through the panel today, we’re really hoping to get different perspectives from speakers that are representatives of different stakeholder communities that can really help us understand better not only how is the threat evolving, but also what can we do to monitor, to detect and to respond to such a ubiquitous threat as is ransomware. So, I’m very happy to be joined by great speakers today. I will introduce them. They’re both here in the room and joining us online. Starting here on my immediate left, Brendan Dowling, the Ambassador for Cyber Affairs and Critical Technology of Australia. On his left, Francesca Bosco, Chief Strategy Officer at the Cyber Peace Institute. Further down the table, we have Chelsea Smethurst, Director for Cyber Policy and Diplomacy at Microsoft. And joining us online, I hope, Julie, you can hear me. It’s Julie Rodríguez Acosta, Minister Counselor for the Permanent Mission of El Salvador to the United Nations. So, we will give each speaker an opportunity to share some of their initial remarks. And then we have structured this panel through a series of questions and answers. At any point, please do feel free to jump in. 
There will be hopefully a dedicated time towards the end to collect your questions, but particularly for those following us online, do not wait until that moment to start writing them in the chat. It will make our life a lot easier if you, you know, proactively start asking your questions. So I would like now to give the floor to Ambassador Dowling here on my left for his remarks, please.


Brendan Dowling: Thanks Giacomo and thanks everyone for joining us. Ransomware is the most prominent cybersecurity threat that we’re facing globally. As Giacomo just went through, it is a sophisticated industry. It’s not new, but it is getting more effective. There is more money being made and then there are more criminal groups taking advantage of this crime type. Importantly, the way that the ransomware ecosystem has developed means you no longer need to be a sophisticated cyber criminal group to be able to conduct a ransomware attack. We have this service industry where you can talk to a liaison person or a broker who will connect you with the person who will conduct the initial attack on a system. There will be people who will fence the data for you, who will conduct each element of the operation for you. So it is now an accessible crime type. And for most ransomware groups, they will just take 20% of the profit from the attack that you conduct. So it’s become democratised, industrialised, and it is ubiquitous. What we’re seeing, what we’re worried about is that ransomware groups seem to be targeting the smaller, more vulnerable parts of our society. They’ve realised that attacking large infrastructure, like with the Colonial Pipeline attack, is bad for business. It’s actually more effective to conduct a higher volume of attacks, even if you’re extracting a lower-value ransom. What we’re seeing at the moment in the Pacific Islands is that some countries with populations fewer than 100,000 people are being targeted by cybercrime groups operating out of Russia. Last week, the National Health Information Service in Tonga was shut down by a ransomware attack. We have deployed a team from Australia to assist them with recovery, but it’s astonishing that a country the size of Tonga, one of the most remote islands in the Pacific, is being targeted, not at their government or business level, but the National Health Information Service. 
At the moment, in hospitals in Tonga, people are using paper and pen to deliver healthcare to their people. Nurses are struggling to process and triage patients because of this attack. So for anyone who doubts how much of a scourge this crime type is globally, that is the sort of activity that we are seeing now. In Australia, we had an attack against the Medibank private health insurance company. 10 million Australians had their sensitive health data compromised. For anyone who thinks ransomware is a technical issue, out of that incident, we saw women and families facing domestic violence from partners who weren’t aware of the health treatment that their spouse or their mother or their sister had been seeking, and had to be moved to safe houses to escape violent partners or former partners. These are not cyber issues. These are not technical issues. These are whole-of-nation security and safety issues. What can we do about it? It’s really hard. This is a crime type that didn’t exist before cryptocurrency. Cryptocurrency enabled the long-range launching of ransomware attacks across the globe. So that financial innovation has made finding this crime much more difficult. We need better access to crypto exchanges, to sharing intelligence amongst national jurisdictions to try and disrupt those parts of the ecosystem. This is a crime type that relies on a lot of brokers, a lot of middle operators who make this system functional. We need to get better at disrupting the entire ecosystem. In Australia, we apply financial and travel sanctions against cybercrime actors. This is an important measure, but it’s a limited measure. We also engage in hard disruption of the ecosystem. Earlier this year, we fried the servers of the people who hosted the data in the ransomware attack against Medibank. But this crime type thrives because too many jurisdictions are not doing enough about it. 
Criminal groups are operating out of safe harbours, safe jurisdictions, where there are few legal consequences. Primarily, these groups are operating out of Russia, not solely, but we need jurisdictions to take this more seriously. That’s why we supported mechanisms like the Cybercrime Convention to try and get more national jurisdictions to cooperate and work together to combat this crime type. Finally, attacks succeed because of basic vulnerabilities. It would be excellent if cybercriminals were forced to use their most sophisticated techniques, but they can get by exploiting common or known vulnerabilities because we’re not doing enough to patch, because technology companies are not making it easy enough to upgrade software and to replace end-of-life hardware. This needs to be a global response to hit all aspects of both the ecosystem in which this crime type thrives, but also how we better build up our resilience. We also need to talk about it more openly. There is a sense of shame amongst businesses or organisations or entities that no one wants to be open about this. And so I get attacked today and my neighbour gets attacked tomorrow, because I didn’t share the information about it. So this is a really important conversation. This crime type is getting worse. It is targeting the most vulnerable. And at the moment we are not winning.


Giacomo Persi Paoli: Thank you. Thank you, ambassador, for starting us off, touching on many points that I’m sure will be picked up by speakers in their remarks and definitely during our Q and A. I would like now to pivot online and welcome Julia connecting from New York. I hope you can hear me and see us okay. And Julia, if you’re ready, the floor is yours.


Julie Rodríguez Acosta: Thank you so much. I hope that you can see me and listen to me okay. Greetings from hot New York City. Today is really, really hot. Let me begin by extending my sincere appreciation to the organizers for convening this timely and important discussion. I cannot think of a better group of stakeholders to reflect on how we can collectively counter the impacts of one of the most pressing information security threats of all time. My first point is that, as was just mentioned, ransomware is not just a cybercrime. It has effectively evolved into a national security crisis around the globe. And its consequences are tangible and personal and affect individuals like you and me. Businesses, hospitals, schools, local governments, they all have been targets. No one is immune. So beyond these immediate impacts, ransomware also has broader implications for international peace and security, including its potential risks to the financing of weapons of mass destruction. So in this context, the United Nations continues to offer a platform to advance dialogue, promote international cooperation and build collective responses. Notably, ransomware was not included in the first annual progress report of the Open-Ended Working Group that is currently addressing these issues, in 2022. So, despite growing concern expressed by many delegations during that year’s discussions on existing threats to information security, El Salvador was among the group of countries that advocated for its inclusion, and we were pleased to see ransomware formally acknowledged in the second Annual Progress Report. So, the ransomware attack that crippled Costa Rica’s government infrastructure set off a wake-up call for many. It demonstrated how ransomware can affect not only institutions, but also states’ ability to deliver essential services and maintain governance. 
Since then, El Salvador has consistently advocated for strong language that addresses ransomware directly, especially as we face new threats exacerbated by other emerging technologies like artificial intelligence. As was just mentioned, AI has enhanced the sophistication of social engineering and phishing campaigns, further expanding the ransomware threat landscape. We also support language reflecting concern over the rise of ransomware-as-a-service models that allow individuals without technical backgrounds to launch highly disruptive attacks. This evolving business model significantly lowers the barrier to entry for cyber criminals and amplifies the capabilities of more technically sophisticated actors. The threat to critical infrastructure and its potential implications for international peace and security must not be underestimated. We have also supported advancing a more holistic view of the ransomware ecosystem, one that includes effective prosecution, disruption of technical enablers, and also breaking the financial cycle that sustains the threat. One of the welcome elements that was introduced in the recent discussions is the recognition of the importance of a human-centric approach, one that prioritizes understanding and addressing the real-world impacts on individuals and communities. So still much remains to be done, from improving international cooperation and victim support to strengthening deterrence mechanisms and the adoption of common standards. I will stop here, but I definitely look forward to hearing the perspectives of the other speakers and continuing this critical conversation, and thank you so much for having me online.


Giacomo Persi Paoli: Thank you, Julia, for sharing your initial remarks. We’ll come back to you with a couple of questions. But now I would like to move to Francesca from the Cyber Peace Institute. We’ve heard already with the first two interventions how one of the main challenges about countering ransomware is actually our ability to track ransomware initiatives or campaigns and trace the various actors and their malicious actions. So the Cyber Peace Institute has been working on this topic. So over to you.


Francesca Bosco: Thank you so much and thanks a lot to the organizers and to Giacomo as moderator. It’s a pleasure to contribute to today’s discussion. Allow me indeed to give a bit of context on the work of the Institute to give also some food for thought for the discussion. The Cyber Peace Institute is an international non-governmental organization that is devoted to reducing the harms from cyber attacks on people’s lives by assisting vulnerable communities. And we do this in a very concrete way, starting by analyzing cyber threats, hence also the participation today, and advocating for responsible behavior in cyberspace based on the evidence that we gather. At the core, what we do is indeed conduct in-depth analysis of cyber incidents, and thanks to this knowledge we both provide free cybersecurity support to civil society organizations and other under-resourced organizations, and we use this knowledge to engage in international forums, also like this one, to promote responsible behavior in cyberspace, emphasizing the human-centric approach and advocating for the protection of fundamental rights and freedoms online. And also by monitoring emerging technologies, like Julie was just mentioning, artificial intelligence, we anticipate how future cyber threats might impact the threat landscape of vulnerable communities. As a tangible example of how we work, and building on the excellent remarks that Giacomo, the Ambassador and Julie just made on the prevalence of ransomware, giving also some concrete examples, we would like to contribute… Thank you to all of you for joining us today. Considering the persistence of ransomware threat actors and the increasing harm caused by ransomware operations worldwide, at the end of March we decided to have a sort of threat-focused type of analysis and type of work, which is the project that we are currently doing. 
Phase one is a global mapping of ransomware threat actors, their geographies, affiliations and targets, providing evidence-based support for stronger multilateral action. Phase two will evaluate state compliance with the UN cyber norms, judicial cooperation, and the misuse of technical infrastructure, paving the way for stronger accountability mechanisms. If you allow me, let's say, five more minutes, we would like to share with you the very first initial findings. To do this, I'm joined online by my colleague, Nedalcho, who will present the preliminary findings from our work. Nedalcho, the floor is yours.


Nedalcho Mihay: Hello. Thank you, Francesca. It’s a pleasure being here. And without further ado, I’ll share my screen now. Thanks. Just to make sure you can see it. Yeah. Is that good?


Giacomo Paoli Persi: Not yet, but I'm sure it will come soon. We still cannot see your screen, Nedalcho.


Francesca Bosca: We can see your name, but not your screen.


Nedalcho Mihay: I’m sorry. I don’t know.


Giacomo Paoli Persi: I'd like to start the transition towards the more interactive part of the discussion, but I'm also looking at our colleagues in the back who are taking care of the tech. Whenever you are ready to show the screen, please just flag it and we will go back to Nedalcho. So, Chelsea, I'd like to come to you, first of all to thank you and Microsoft for convening this event and for the leadership Microsoft has been showing in promoting multi-stakeholder discussions on this interesting and important topic. I see that perhaps the screen issue has been resolved, but since what we see online is an infinite repetition of the same screen, while the technology is still being sorted out, perhaps I come to you, Chelsea, with the first question, and then we can go back to Nedalcho. How has the global ransomware threat evolved in recent years, and what trends are most concerning today? This may actually be a very good introduction to what Nedalcho is going to show.


Chelsea Smethurst: Yeah, fantastic. Thank you for inviting me. So just briefly, Microsoft produces a Digital Defense Report annually, usually in October of every year, and what we've seen in terms of year-over-year change for ransomware is a whopping 275 percent increase in the use of ransomware in just the last 12 months, and there have really been two accompanying trends. One is that, while we've seen that 275 percent increase over the last 12 months, we've also gotten better as a collective industry at actually defending against ransomware. In quantitative terms, we've seen a threefold decrease in the overall amount of ransomware that gets to the encryption phase, what I call the lockout phase. And that's really significant, because once you get there, you're really at the behest of the cybercriminals who are using the ransomware. Secondly, while we've seen some positive numbers to set against that really large increase in the use of ransomware, what has not changed is that over 90 percent of successful ransomware attacks hit unmanaged devices. And these are very much entities like hospitals or NGOs, which will continue to be targeted because they have access to fewer resources. So thinking about our collective capacity to address ransomware, we're really only as strong as our weakest link, and I think this is very true in the ransomware domain. So that's a little bit of context, and I'd be happy to switch over to the CyberPeace team now, because that gives you some numbers on what we're seeing in the last 12 months with ransomware.


Giacomo Paoli Persi: Thank you, Chelsea, for this initial introduction to the threat landscape. Let's try to go back online to Nedalcho and see if we're now in a position to share your screen and see your slides. I think it should work now. Yes, I confirm we can see it. Thank you.


Nedalcho Mihay: OK, thank you. Yes, just an introduction: my name is Nedalcho Mihay, and I'm a cyber threat analyst with the CyberPeace Institute, where I've been heavily involved in working with the incident tracer platforms. As Francesca mentioned, the project consists of two phases, so I'll skip this and go straight into the aims and objectives of the study. The aim is to compile a statistically representative dataset on global ransomware activity, including the targets of ransomware attacks and the names and locations of ransomware threat actors. We had two objectives. The first is to create a database of threat actor profiles, including the name of the threat actor, associated ransomware, and location country. The second is to create a database of global ransomware incidents, including target location, target sector, and threat actor name.

I also want to very briefly touch upon our research methodology, as it is a central part of the work of the analysis team. We start with the analytical questions and key terminology; we create the data collection schemas for both threat actors and incidents; we define the key sources; and then we document the limitations of our work, which mainly revolve around the usual constraints of open source research and the current limits of AI and LLMs, as we incorporate automation into every step. The research was guided mainly by four analytical questions. First, which threat actors have been responsible for the development, deployment or facilitation of ransomware operations? Second, ransomware threat actors operate from, originate in, or are located in which countries or regions? Third, what open source indicators (personnel, linguistic patterns, technical infrastructure) contribute to the geographical attribution of ransomware threat actors? And finally, which locations and sectors have been most frequently targeted by ransomware attacks?

For data sources, we use data shared by partners or gathered through open source intelligence, in both structured and unstructured formats, and we have incorporated all of our previous research from our cyber incident tracers, which could have influenced the results of the data collection. Now to the initial analysis and findings. We have analyzed information on around 300 threat actors; 52 percent of them remain unattributed to a specific geographic location. Of the attributed threat actors, 54 percent are linked to Russia, followed by 8 percent linked to Iran and 7 percent to China. In terms of the data collection on global ransomware incidents, we have collected information on 2,717 incidents conducted by 184 threat actors against organizations in 22 sectors across 90 countries. More than half of all attacks targeted organizations in the United States, and more than a third targeted the healthcare sector, followed by non-profits and ICT, with the top three most active threat actors in our database being LockBit, BlackCat, and REvil. The following slides illustrate how one of our graph analysis tools helps us with data analysis and visualization. The first is an analysis of all incidents pivoted around targeted countries. The second is a visualization of our research into threat actors, which have been grouped and mapped to the countries they are connected to. You will notice that some actors appear linked to several countries, either because members were arrested in multiple jurisdictions or because several open source indicators connect them to more than one country. And finally, the last two slides present a simplified dashboard view of our initial results. The first shows the results of our analysis of the targets of ransomware attacks, and the second the results of our analysis of the perpetrators of ransomware attacks. On the top right, you can see the distribution of threat actors' connections to geographic locations, and on the bottom left, you can see the distribution of threat actors across the global ransomware incidents database. Thank you.
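The kind of aggregation described here, pivoting a flat list of incident records by target country, sector, and threat actor, can be sketched in a few lines of Python. The records below are invented placeholders for illustration, not entries from the Institute's actual dataset:

```python
from collections import Counter

# Invented sample records; the real dataset holds 2,717 incidents.
incidents = [
    {"actor": "LockBit",  "country": "US", "sector": "healthcare"},
    {"actor": "LockBit",  "country": "US", "sector": "ICT"},
    {"actor": "BlackCat", "country": "DE", "sector": "healthcare"},
    {"actor": "REvil",    "country": "US", "sector": "non-profit"},
]

def pivot(records, key):
    """Count incidents grouped by one field, most frequent first."""
    return Counter(r[key] for r in records).most_common()

print(pivot(incidents, "country"))  # [('US', 3), ('DE', 1)]
print(pivot(incidents, "actor"))
share_us = sum(r["country"] == "US" for r in incidents) / len(incidents)
print(f"US share of attacks: {share_us:.0%}")
```

Scaled to the full set of 2,717 incidents, the same pivots produce the country, sector, and actor distributions shown on the slides.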


Giacomo Paoli Persi: Thank you, Nedalcho, for this inspiring presentation, representative as it is of the research community. I'm always in favor of bringing more evidence and data-driven decision-making to the table, so thank you so much to you and to the Cyber Peace Institute for this initiative. I'm really looking forward to seeing how it evolves. Before we go back to the panelists and continue with our questions, I just wanted to remind colleagues online that you can start asking questions in the chat if you wish. We have Michael Karamean from Microsoft as our great online moderator, and he will make sure that those questions reach me here in the room. Ambassador, I would like to come back to you, and also to you, Julia, because you both alluded to the fact that ransomware is not just cybercriminal behaviour: it can escalate to the threshold of being a national security threat, or at the very least a national security concern, for a variety of reasons. Would you mind elaborating on your perspective on this?


Brendan Dowling: I think it's a really important way of framing the issue. Cybercriminals flourished in a context where we thought about ransomware as a cybersecurity issue that our CISOs or our ICT teams needed to be conscious of. But as we've seen the ramifications of ransomware attacks resonate and ripple through society, I think increasingly we have to be conscious that these are not confined; they're not purely cyber incidents; these are whole-of-nation incidents which governments need to take much more seriously. The important part of that message is that when an entity, an organisation or a business is attacked, it shouldn't be seen as just something that affects that business. Oftentimes the externalities of a ransomware attack are borne by the community and by the broader government; they're not just about the effect on that business. As I said before, if businesses or entities don't talk or share information about their attacks, that actually impedes the ability of their competitors or other people in the industry to protect themselves. So we need to start seeing ransomware as a much broader national threat, to say, one, not only is it okay to talk about these types of attacks if they hit you, but actually we need you to do that to better protect our citizenry and our nation. So we're doing a lot in Australia to drive that behaviour: increasing our expectations on industry to report attacks, and making clear that if you seek assistance from the Australian Cyber Security Centre, that is not a bad thing, not something to be ashamed of; it's actually a trusted government entity that can help you out. When we see these attacks affecting society so broadly, it needs a whole-of-society response, not a manageable, keep-it-within-yourself, it-only-affects-you sort of attitude.
So it’s taken us too long to get to this point, but now I think we’re realising because of the scale of the attacks that this is a national security threat that requires national and global responses. Thank you.


Giacomo Paoli Persi: Julia, I would like to come back to you as well, because in your remarks you also highlighted how, to some degree, what happened in Costa Rica was a wake-up call for many governments, and you alluded to the fact that even in El Salvador you’ve started to take proactive action and initiatives with respect to ransomware. So would you mind elaborating a little bit how you see ransomware as a potential national security concern?


Julie Rodríguez Acosta: Thank you so much, Giacomo. Yes, following on the remarks just delivered by Ambassador Dowling, first, we see an increased number of ransomware attacks targeting critical infrastructure, and this is very concerning. These attacks, as was mentioned, go beyond financial motivations and represent a clear violation of the guidelines of responsible state behavior. While many of these attacks fall under the realm of cybercrime, there is growing evidence of state-linked ransomware operations conducted with a certain tacit state tolerance. We even see cases where ransomware has been used not primarily for financial gain, but as a vector to conduct denial-of-service attacks that affect the availability of systems at the national level. This is linked with the case of Costa Rica, which was the first time a national government was directly targeted in such a way. That attack disrupted essential public services and compromised the confidentiality of citizens' personal data. Beyond the immediate impact, it undermined public trust in the state's ability to secure digital systems, and this is especially worrisome as all governments are trying to digitalize public services. And, as I mentioned a little in my initial remarks, we also see linkages between ransomware and broader security concerns, particularly through the theft of cryptocurrency, as was mentioned by Ambassador Dowling. In some cases, these stolen assets have reportedly been used to fund weapons of mass destruction programs and their delivery systems, and this is a direct threat to international peace and security. Also, the use of cryptocurrency complicates attribution and prosecution, making it more difficult to hold perpetrators accountable. So these are just some examples of how ransomware intersects not only with national security, but also with the broader international security architecture.
And this evolving threat landscape demands close coordination between governments and multilateral institutions and other stakeholders, as it was mentioned by the


Giacomo Paoli Persi: Thank you, Julia. I’d like to go back to Chelsea and Francesca because both Microsoft and CPI, in a different way, you collect a lot of data and you have visibility in a way that perhaps other organizations don’t. So I would like to go back to where we started, which was with the recognition of how ransomware is increasing and the number of ransomware attacks has been growing significantly over the past 12 months. So based on the data that you have collected as a business, Chelsea, or as an organization that focuses on open source data with CPI, what can you share with us around the reasons behind why we’ve seen these numbers grow so much? Perhaps you can start, Chelsea, and then we’ll go to Francesca.


Chelsea Smethurst: Great, thank you. So I think there are really three trends, but I'll start with some sobering numbers. At Microsoft, we track over 600 million cyber attacks daily. If you break that down to a minute-by-minute basis, you're looking at somewhere around 415,000 attacks a minute. And that's just us as a company and what we have purview and visibility into. So we're up against a pretty large mountain in terms of cyber attacks. But there are probably three things specific to ransomware that we're really seeing on the Microsoft side. One is what we call ransomware as a service, which is essentially a product. This does two things. It lowers the barrier to entry for cyber criminals who want to use these tools and techniques, because it's easier, frankly. And secondly, it allows scalability: if it's as easy as downloading a piece of software, clicking a button and getting paid, more people are going to do it. So that's another reason why we see an uptick in the use of these technologies. Secondly, and it's been mentioned by a couple of our panelists today, there is the rise of cryptocurrency. This is problematic for two reasons: it makes it easy to get paid for these ransomware attacks, and it is much harder to track. And with anything in cybersecurity, if you can't assign accountability and transparency, it's really hard to deter these attacks, because actors can hide behind actions that are difficult to trace. But finally, the third and probably the most important factor in this issue is what we call safe havens. These are geographic jurisdictions from which ransomware attacks against international victims can be based, but which are really not held accountable at the legal and international level.
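The minute-by-minute figure quoted here is simple division: 600 million daily attacks spread over the 1,440 minutes in a day. A quick check:

```python
attacks_per_day = 600_000_000   # Microsoft's daily tracking figure quoted above
minutes_per_day = 24 * 60       # 1,440 minutes in a day
per_minute = attacks_per_day / minutes_per_day
print(f"{per_minute:,.0f} attacks per minute")  # 416,667, close to the ~415,000 quoted
```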
And it’s really difficult from an industry perspective to really target and sort of minimize these, what I call safe haven opportunities for ransomware. And so this is an area where I would like to see, I think, more collective international cooperation across both the private sector and also governments. And that’s something that I think we’ll see more of in the future.


Giacomo Paoli Persi: Thank you. Francesca?


Francesca Bosca: Yeah, maybe I can. Some of the points were already made, so maybe just on the first one, the ease of access to tools and the rise of ransomware as a service that all the previous speakers mentioned: this is, I would say, potentially amplified and enhanced by artificial intelligence and emerging technologies, so it is definitely something that will have an impact. The cryptocurrency ecosystem and the widespread availability and relative anonymity of cryptocurrencies obviously facilitate ransom payment, and this is leveraged by perpetrators. The safe havens were mentioned. Maybe what I can add is something mentioned before, I think by Julie, and it's an excellent observation: the expanding global digital footprint. Especially with remote work, poorly secured systems and legacy infrastructure provide more vulnerabilities for threat actors to exploit. But this also means that they are trying to optimize the way they work, using the same infrastructure for launching different types of criminal activity. This is why the second phase of the program will focus specifically on exploitable and exploited infrastructure, which is not so well investigated, and one output of this mapping will be to demonstrate that the same infrastructure is likely used, for example, for other crimes beyond ransomware. And allow me to mention two other factors behind the increase. There is a thriving market of what we call initial access brokers: brokers that specialize in obtaining and selling access to compromised networks, often of high-value organizations, which ransomware groups then exploit to deploy their malware.
So it's a sort of cyber-organized-crime form of activity, with very specialized professionals at the beginning providing ransomware operators with the access they need to then carry out the attack. And let's not forget another very important point. We've seen ransomware groups shifting from opportunistic attacks, launching widespread attacks against as many individuals as possible, to more strategically targeting critical infrastructure, for example healthcare, education, even civil society, with limited cybersecurity resilience but high sensitivity to disruption. And that's interesting because criminal profits still hit record highs: according to Chainalysis, victims paid more than 1 billion US dollars in 2023, facilitated through cryptocurrency, which means criminals are still getting quite some profit out of it.


Giacomo Paoli Persi: Thank you. If we have time at the end, I'd like to go back to this discussion of drivers, because in basic criminal studies, you know that criminals need motives, means and opportunities in order to perpetrate their crime. There is a lot of discussion around the means and how the means are evolving, whether it is technology, cryptocurrency, or permissive regulatory regimes that enable them to do what they do. But I don't think there is enough focus on opportunities, that is, the weaknesses of the system that they can then exploit. And one could argue the regulatory one is probably a hybrid between a means and an opportunity. If we have time, I'd like to discuss this more. But going back to a point that Chelsea mentioned around safe havens, I would like to come back to you, Brendan: what mechanisms currently exist to hold states accountable when ransomware groups operate with impunity within their borders? What can states do?


Brendan Dowling: It's a tough one. We have an established norm on this issue, agreed as part of the 11 norms of responsible state behaviour, which essentially said states should take responsibility for preventing malicious cyber actors from operating with impunity in their territory. But we still see this happening quite commonly. We then look to what international measures we have that can help us address the issue. One is bilateral. We have seen several attacks launched from Russian territory against Australia or partners in the region; we engage with the Russian government and make clear that we expect action to be taken against these actors. Usually there is no response. So a big part of the problem is that we have a government that is not taking this seriously and is in fact likely profiting from some of the criminal activity. We have then used sanctions to try to target the people behind the attacks. These are a limited measure; they do have an impact and a deterrent effect, but attribution is a challenge, and sanctions, if a person does not have financial assets in your country, are always going to have limited effect. Law enforcement responses have to be part of the answer: when we do find cyber criminals in jurisdictions that will cooperate, ensuring the digital evidence is made available to support successful law enforcement and prosecution. And that's where the Budapest Convention and the new Cybercrime Convention will hopefully bring more states to take seriously legal measures to combat cyber criminals who come within their jurisdiction. Finally, we consider that disruption measures have to be part of the solution.
When you've exhausted all other avenues to achieve a law enforcement response, when you have safe havens where people operate with impunity, finding active disruption measures to take down infrastructure, to throw sand in the gears and make life harder for these actors, has to be an important part of the response. Again, it is challenging and time-consuming, and attribution can be a difficulty, but we have had success against groups like LockBit, where there have been significant enough impacts on their infrastructure to disrupt their operations for some time. The Counter Ransomware Initiative, I think, has been a really effective grouping that has brought countries together to talk about building up cooperation to combat ransomware. That's still a work in progress, but a much broader church of countries is coming together to take this seriously. So we're going in the right direction. But hearing the sort of figures that Chelsea shared, there is a long way to go to seriously put a dent in this crime.


Giacomo Paoli Persi: Thank you. I’d like to come back to you, Julia, to look at more of the multilateral side of this equation. But before I do, I just wanted to share with perhaps Chelsea and Francesca an interesting question that came online, so you have the time to think about it, while Julia gives us her multilateral perspective. And the question reads, is there any research on how blockchain deployment is correlated to the mitigation of cyber threats? If no, how do we promote this research topic? And if yes, what is the outcome? So anything that you can think of related to the use of blockchain in this context would be very, very useful. But now, Julia, coming back to you, what role do you think should the UN play, not only in establishing norms, but also potentially in establishing frameworks for state accountability in cyberspace?


Julie Rodríguez Acosta: Thank you so much, Giacomo. As Ambassador Dowling just highlighted, at the UN we have the framework for responsible state behavior, which outlines expectations on how states should act in cyberspace, and includes voluntary non-binding norms, reaffirmation of the applicability of international law, and capacity building. One of the key principles is that critical infrastructure must be protected and respected, and is effectively off limits. However, while the framework says one thing, the reality on the ground tells us a different story. As we see from the Cyber Peace Institute research, data and reporting continue to show a rise in hostile and pervasive cyber activity, including ransomware attacks that often target the very infrastructure meant to be protected. So the UN should continue to play a central role in promoting the implementation of these norms, encouraging states to take operational actions at the technical level to enhance compliance. This includes advancing cooperative measures, information sharing and joint investigations, but also reinforcing norms that clearly outline unacceptable behaviors, especially those that undermine trust, security, and stability in cyberspace. More broadly, the international community must work together to disrupt the ransomware business model and build resilience. National policies, laws, and technical capabilities alone are not enough to address what is inherently a transnational threat; no country can tackle this challenge in isolation, and thus we promote doing this also through multilateral forums. So, rounding up: more international cooperation is needed, but it must be cooperation that is practical and action-oriented, focused on disrupting, deterring and preparing, with more effective response mechanisms.


Giacomo Paoli Persi: Thank you, Julia, for your perspective, and this also gives Chelsea and Francesca a couple more minutes to think. I thought I'd add that, being at UNIDIR, I have had the privilege of witnessing how the UN discussions have evolved. We're now getting to the point where the current open-ended working group is wrapping up its five-year mandate, and we are about to enter a future permanent mechanism, with some details already agreed and others still up for negotiation. It looks like this new mechanism will, at least potentially, have the opportunity to focus more on the implementation of all the existing commitments already in place. Beyond all commitments, there has to be the political will to implement them; if there isn't, no practical measure can work. But if we take that political commitment as a given, then there are a number of issues where the UN and multilateral approaches can really help, as Julia was mentioning. Some states may not even be aware that their territory is being used as a safe haven or staging ground for ransomware campaigns. Others may be aware but may not have the means to do anything about it: technological means, but also legal means, because maybe they don't have national legislation that allows them to intervene. And all of these things, despite the fact that we're talking about ransomware and cybersecurity, which makes them feel new, are not new in the UN system. There are many conventions that have been negotiated before and then followed by practical instruments and measures developed to help states implement them and comply with their commitments.
So perhaps there will be an opportunity to develop a model law or model legislation for those countries that need to adopt some sort of regulatory measures at the national level that would enable them to intervene and disrupt a ransomware campaign emanating from their territory. You need to have legal coverage to do certain things: if you want to share evidence with your neighbour, if you want to cooperate, these require well-developed regulatory frameworks or cooperation mechanisms, which would require a little bit of assistance to develop. And with that, I turn back to Francesca and Chelsea and ask if you have had the chance to think about the topic of blockchain, and whether or not you are aware of any work or research that has been conducted to explore the extent to which it can be helpful in this context.


Chelsea Smethurst: I can go next. Great. So I'm not aware of the latest research around cryptocurrency, Bitcoin and blockchain, but one brief point I'll make is that cryptocurrencies are ultimately based on blockchain technology, and cryptocurrency and financial transactions are actually processed through exchanges. I'm sure there's a lot more to assess on that topic, but it is an interesting part of the technology platform that can be used for both positive means and criminal means too. So, good question. I'm looking forward to Francesca's points on this.


Francesca Bosca: That's interesting, because it's a topic close to my heart: it was two jobs ago, let's say, that I was doing research on blockchain. So, provided that my information is a little bit outdated and I would need to look into it again, there are a couple of aspects. From a technical standpoint, I do see, and I remember doing some research on, how blockchain can be used not as the black sheep of cybersecurity but the opposite, and there are some practical implementation areas. I'm thinking about threat intel sharing, for example, which can be extremely beneficial when we think about cybersecurity. I'm thinking about identity and access management, where the decentralization of digital identities can help, for example, when it comes to decentralized security, thanks to the distributed architecture and the consensus mechanisms. The key strength of blockchain resides in the immutable data ledger: you can obviously improve audit trails and data integrity. Again, this is outdated information, but a couple of things came to my mind. ENISA, the EU agency, did some interesting work before COVID, in 2019, on distributed ledger technology and cybersecurity, and there is also work done by the World Economic Forum on blockchain cybersecurity, again in 2020. These are the only two that came to my mind, because my knowledge is a little bit outdated. Can I just mention one thing? It's a very good question also because I think it helps us in thinking about potential future directions.
What I would be really interested in seeing is, for example, how to use blockchain for ransomware resistance and incident attribution. And one interesting aspect that is kind of like collateral is also how you can integrate blockchain with an AI for automated threat detection as well.
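The "immutable data ledger" property Francesca points to can be illustrated without any blockchain platform at all. Below is a minimal, hypothetical sketch (not any production system) of a hash-chained audit log for shared threat-intelligence records: each entry commits to the hash of its predecessor, so tampering with any past record is detectable by re-verifying the chain.

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    # Hash the previous entry's hash together with the record contents,
    # so every entry commits to the entire history before it.
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditChain:
    """Toy append-only log: each entry stores the hash of its predecessor."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> None:
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        self.entries.append((record, entry_hash(prev, record)))

    def verify(self) -> bool:
        # Recompute every hash; any modified record breaks the chain.
        prev = self.GENESIS
        for record, h in self.entries:
            if entry_hash(prev, record) != h:
                return False
            prev = h
        return True

chain = AuditChain()
chain.append({"event": "indicator shared", "ioc": "203.0.113.7"})
chain.append({"event": "incident logged", "sector": "healthcare"})
assert chain.verify()

# Silently editing an earlier record (keeping its stored hash) is detectable:
chain.entries[0] = ({"event": "indicator shared", "ioc": "198.51.100.9"},
                    chain.entries[0][1])
assert not chain.verify()
```

Distributed-ledger systems add consensus across independent parties on top of this chaining, which is what makes the audit trail hard for any single participant to rewrite.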


Giacomo Paoli Persi: Thank you. We may come back to the more general topic of which technologies exist out there that could help, but I’m conscious of the time: we were probably a little too ambitious with the number of questions we prepared, and we have about 12 minutes before we have to wrap up this interesting session. So I wanted to make sure I give the opportunity to colleagues in the room. Is there anyone who would like to ask a question? I see one. Please reach for the microphone on your right and introduce yourself before asking your question.


Vilda: Yes, thank you. Can you hear me?


Giacomo Paoli Persi: Yes.


Vilda: Perfect. Thank you so much for an excellent panel. My name is Vilda, and I think ransomware is such an interesting type of crime; as a criminologist, I’ll allow myself to say that it’s maybe my favorite kind of crime, at least from an academic perspective. I have a question for Julie and Brendan, who are tackling cybercrime from the government sector. I wrote my master’s thesis on ransomware, and one of the many interesting and unique aspects of ransomware is that, as far as I can tell, it’s the only type of crime, not just cybercrime, where the private sector dominates both crime prevention and handling the incident and dealing with the aftermath. So I was wondering, coming from a government perspective, how are you dealing with that sort of cooperation between government and the private sector in your respective countries? Thank you.


Brendan Dowling: It’s a really interesting question, and you’re right: in cyber, so much of the front line is in the hands of the private sector. In no other form of crime or attack type would we say, well, that’s completely a responsibility of the private sector, whether they tell anyone about it is their business, and they’re on their own to deal with it. So that makes for a much more challenging environment. Some of the things we’ve done in Australia, without using compulsion, try to build a far more collective response to these crime types. When we had two major cyber incidents affecting millions of Australians, going back to 2022, the government came out and very publicly engaged with those companies, sending teams of police officers and cybersecurity experts from the government to sit in the headquarters of those companies, provide assistance, and launch the investigations in a very collaborative way. Now, those were very large-scale incidents. More broadly, we have tried to create an environment where we normalise engaging with the government as soon as there is an incident, where sharing information with the government to aid the response is not just a nice thing to do but the expected thing to do. We have introduced legislation that says if you, as a private company, engage with our cyber security centre, the information you share with them will not be used for regulatory purposes, so you can trust there is a safe space to engage and seek that assistance. And now we are introducing a mandatory ransomware payment reporting scheme. With all these measures we are trying to create an environment where an incident is not seen as purely something for a business or an entity to manage; it is seen as something that needs a collective response, and where active engagement with the government, with law enforcement, and with our cybersecurity experts is the normal way of responding to these incidents.
It will take time, but I think it actually improves when that type of behaviour is modelled well by companies. Once some companies start to do this and it becomes a new norm that this is how you engage, that creates an environment where others are doing the same. So we are trying to make all these efforts to normalise that it becomes a collective response rather than just something that is dealt with in isolation.


Giacomo Paoli Persi: Thank you. Julie, would you like to come in on this question?


Julie Rodríguez Acosta: Yes, thank you so much for the question; I think it is very pertinent, and I would like to provide some insights from a Global South perspective. There is often this idea that ransomware only targets large enterprises or companies that can afford to pay substantial ransoms, but in reality small organisations around the globe are affected, and the consequences for these small organisations are often massive. On a national level, we have this kind of multi-stakeholder cooperation. As a government, we have enacted laws on cybersecurity and, very recently, on data protection, because ransomware is often related to data theft, so we wanted to make sure we have in place laws that also protect personal data and privacy. Then, of course, there are linkages with private industry and law enforcement agencies, and, as a small nation, we leverage a lot of the cooperation we can build through entities like the United Nations. We also work a lot with regional organizations, for example the OAS. So we build on everything that has been done at the international level that can help us. And, as I mentioned before, we rely on a lot of bilateral cooperation, trying to learn from those who have more advanced capabilities to fight this threat. But as we have highlighted throughout the panel, this threat is global, so it is in the best interest of all to leverage that level of cooperation so we can all combat and counter ransomware.


Giacomo Paoli Persi: Thank you. Actually, I would like to take this question and link it to the rest of the panel, because we did have on our list a question about successful models for public-private collaboration on cybersecurity that could potentially be scaled. So, going back to you, Chelsea, and Francesca: you have probably seen many different configurations of how the public and private sectors work together. What are some of the most successful stories you’ve seen, or some of the models you think could be used as inspiration?


Chelsea Smethurst: I’ll go ahead and start, and I really liked the question from the audience. Just earlier this month, Microsoft announced a pilot program with Europol to integrate our digital crimes investigators into their European Cybercrime Centre in The Hague. I think these sorts of novel public-private partnership models are an interesting thing to try out across different sectors, because you’re marrying private sector expertise, the front lines of ransomware that we see, with the legal and investigatory powers of states and governments. That’s a very powerful tool, and I’d like to see those models, where applicable, replicated across different environments. It’s a really great question, and it remains to be seen whether this model with Europol will scale and be successful, but the willingness of the private sector and governments to try to partner is a really great start.


Francesca Bosca: Maybe the other one that comes to mind is the Ransomware Task Force, a multi-stakeholder effort with participation from across government, industry, and civil society, and I think it was very well received and well sustained. So those are the ones that come to mind. And maybe, just to advocate with my civil society hat on, I would say it is not only the private sector and governments that should work together; including civil society organizations can definitely bring added value. One aspect, which I tried to highlight earlier in the panel, is documenting basic data: the impact that ransomware is ultimately having, as we said and as the previous panelists remarked, as a societal threat, not just a technical or business-related one. Civil society can also support thinking outside the box: it has a unique capacity to put forward proposals such as ethical frameworks, to support victim-centered response protocols, and to reinforce the need for due diligence in digital infrastructure. Coupled with what Brendan mentioned before in terms of collaboration with law enforcement and with the private sector, I think civil society can definitely play a role.


Giacomo Paoli Persi: Thank you. I’m conscious of the time. We have just over two minutes before we need to wrap up this discussion. So I would like to do one final round to all our speakers and give you the chance, starting with you, Ambassador, to 30 seconds. What is the one key takeaway you would like the audience here in the room and online to bring back after this session?


Brendan Dowling: I think working together on this issue is key. There are private sector responses that build resilience, such as threat intelligence sharing: Microsoft helped us as we tracked down the perpetrator behind the Medibank attack. This affects every nation at all levels. We’ve been too slow to come together and act collectively against it. There’s no one lever; we need to pull all levers at once. So talking about this as a national policy issue in all your countries is crucial.


Giacomo Paoli Persi: Thank you. Coming to you, Julie, online: your key takeaway.


Julie Rodríguez Acosta: Thank you, Giacomo, and thank you so much to all the panelists; this has been really enriching. As you said, we are at the UN, where states are working to establish a permanent mechanism to succeed the current Open-Ended Working Group. These are critical opportunities for all states, regardless of their size and capacity, to share insights on ransomware. We can design mechanisms within the UN to advance international cooperation and fight ransomware together.


Francesca Bosca: I was reflecting on one of the earlier panels, and I’m super happy to hear very concrete initiatives and good examples. My aspiration for this panel is that we go out saying collaboration needs to be meaningful, not tokenistic: collaboration is not just a buzzword we need to have there, it needs to make an impact. Allow me to add one thing I forgot to mention before, which I think is important and which we didn’t have time to dig into: capacity building. Don’t take for granted that we are all on the same page. We got some very good examples from different areas of the world, and we need to build an inclusive capacity-building work stream across the different sectors and geographies.


Giacomo Paoli Persi: Thank you. Chelsea?


Chelsea Smethurst: Finally, to close this off, I’d like to see how capacity building and skilling change to tackle this problem. I think it will be an exciting next few years as we see creative uses of artificial intelligence tools to counter ransomware, and I don’t think that will sit just in the hands of big tech providers. I’m looking forward to seeing how our countermeasures against ransomware evolve. Thank you.


Giacomo Paoli Persi: Thank you very much. With that, all that is left to do is to thank our speakers for sharing their experience and knowledge with us, to thank all of you in the audience, here in person and online, for participating, and again to thank Microsoft for bringing us together to discuss this very interesting topic. Thank you very much.



Giacomo Paoli Persi

Speech speed

149 words per minute

Speech length

2644 words

Speech time

1057 seconds

Ransomware attacks have grown by nearly 300% in the last year, becoming the most prominent global cybersecurity threat

Explanation

Giacomo presents ransomware as an urgent global challenge that has seen dramatic growth despite being a known threat. He argues that this growth is due to technological factors making campaigns more sophisticated and the emergence of commercial off-the-shelf ransomware tools that have lowered barriers for cybercriminals.


Evidence

Nearly 300% growth in ransomware attacks last year; emergence of cybercrime-as-a-service model; increased sophistication and difficulty to detect


Major discussion point

Evolution and Scale of Ransomware Threats


Topics

Cybersecurity


Agreed with

– Chelsea Smethurst
– Brendan Dowling

Agreed on

Ransomware has dramatically increased and represents a global threat requiring urgent action



Chelsea Smethurst

Speech speed

171 words per minute

Speech length

1066 words

Speech time

373 seconds

Microsoft tracks over 600 million cyber attacks daily, with ransomware showing a 275% increase in usage over 12 months

Explanation

Chelsea provides specific data from Microsoft’s Digital Defence Report showing the massive scale of cyber attacks they monitor daily. She notes that while ransomware usage increased dramatically, they’ve also seen a 300% decrease in attacks reaching the encryption phase, indicating improved defensive capabilities.


Evidence

600 million cyber attacks daily tracked by Microsoft; 275% increase in ransomware usage; 300% decrease in attacks reaching encryption phase; over 90% of successful ransomware attacks target unmanaged devices


Major discussion point

Evolution and Scale of Ransomware Threats


Topics

Cybersecurity


Agreed with

– Brendan Dowling
– Nedalcho Mihay
– Francesca Bosca

Agreed on

Vulnerable sectors and populations are increasingly targeted by ransomware


Safe havens where ransomware groups operate with impunity, primarily in Russia, enable continued criminal activity

Explanation

Chelsea identifies safe havens as geographic entities where ransomware operators can base their attacks against international victims without being held accountable at legal and international levels. She emphasizes this as a critical factor enabling the growth of ransomware and calls for more international cooperation to address these safe haven opportunities.


Evidence

Geographic entities where ransomware operators face no legal accountability; difficulty for industry to target and minimize safe haven opportunities


Major discussion point

Enabling Factors and Criminal Ecosystem


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Brendan Dowling
– Nedalcho Mihay

Agreed on

Safe havens and jurisdictional challenges enable ransomware operations


Microsoft’s pilot program with Europol integrates private sector expertise with government investigatory powers

Explanation

Chelsea describes a novel public-private partnership model where Microsoft integrates digital crimes investigators into Europol’s European cybercrime center. This approach combines private sector front-line expertise with government legal and investigatory capabilities to create more powerful tools against ransomware.


Evidence

Pilot program announced earlier this month integrating Microsoft investigators into Europol’s European cybercrime center in The Hague


Major discussion point

Public-Private Collaboration Models


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Brendan Dowling
– Francesca Bosca
– Julie Rodríguez Acosta

Agreed on

Multi-stakeholder collaboration is essential for effective ransomware response


Disagreed with

– Francesca Bosca
– Brendan Dowling

Disagreed on

Role of civil society in ransomware response


Artificial intelligence tools show promise for evolving countermeasures against ransomware attacks

Explanation

Chelsea expresses optimism about the future use of AI tools to counter ransomware, suggesting that these capabilities won’t be limited to big tech providers. She sees this as an exciting development for the next few years in the evolution of ransomware countermeasures.


Major discussion point

Capacity Building and Future Directions


Topics

Cybersecurity



Nedalcho Mihay

Speech speed

160 words per minute

Speech length

680 words

Speech time

253 seconds

Analysis of 2,717 ransomware incidents shows over half targeted US organizations, with healthcare being the most affected sector

Explanation

Nedalcho presents findings from the Cyber Peace Institute’s research analyzing global ransomware incidents. The data reveals geographic and sectoral patterns in ransomware targeting, with the US being the primary target and healthcare being the most vulnerable sector.


Evidence

Analysis of 2,717 incidents conducted by 184 threat actors against organizations in 22 sectors across 90 countries; more than half targeted US organizations; more than a third targeted healthcare sector, followed by non-profits and ICT; top three most active threat actors: LockBit, BlackCat, and Rebel


Major discussion point

Evolution and Scale of Ransomware Threats


Topics

Cybersecurity


Agreed with

– Brendan Dowling
– Chelsea Smethurst
– Francesca Bosca

Agreed on

Vulnerable sectors and populations are increasingly targeted by ransomware


52% of analyzed threat actors remain unattributed, while 54% of attributed actors are linked to Russia

Explanation

Nedalcho’s research reveals significant challenges in attribution, with over half of ransomware threat actors remaining geographically unattributed. Among those that can be attributed, Russia emerges as the primary source, followed by Iran and China.


Evidence

Analysis of around 300 threat actors; 52% remain unattributed; of attributed actors: 54% linked to Russia, 8% to Iran, 7% to China


Major discussion point

Attribution and Geographic Distribution


Topics

Cybersecurity


Agreed with

– Chelsea Smethurst
– Brendan Dowling

Agreed on

Safe havens and jurisdictional challenges enable ransomware operations


Open source research reveals geographic patterns and infrastructure connections among ransomware threat actors

Explanation

Nedalcho describes the methodology used to map ransomware threat actors using open source intelligence, including personnel, linguistic patterns, and technical infrastructure indicators. The research creates databases of both threat actor profiles and global ransomware incidents to support evidence-based multilateral action.


Evidence

Use of open source intelligence including personnel, linguistic patterns, technical infrastructure; creation of threat actor profiles database and global incidents database; graph analysis tools for data visualization


Major discussion point

Attribution and Geographic Distribution


Topics

Cybersecurity



Brendan Dowling

Speech speed

144 words per minute

Speech length

2173 words

Speech time

899 seconds

Ransomware-as-a-service model has democratized cybercrime by lowering barriers to entry and allowing non-technical criminals to conduct attacks

Explanation

Brendan explains how the ransomware ecosystem has evolved into a sophisticated service industry where criminals no longer need technical expertise. He describes a system with brokers, liaisons, and specialists who handle different aspects of attacks, with ransomware groups typically taking only 20% of profits, making it an accessible and industrialized crime type.


Evidence

Service industry with liaison persons/brokers connecting attackers; specialists for data fencing and each operation element; ransomware groups take only 20% of profits; democratized, industrialized, and ubiquitous nature


Major discussion point

Enabling Factors and Criminal Ecosystem


Topics

Cybersecurity


Agreed with

– Giacomo Paoli Persi
– Chelsea Smethurst

Agreed on

Ransomware has dramatically increased and represents a global threat requiring urgent action


Cryptocurrency enables long-range ransomware attacks and makes financial tracking more difficult for law enforcement

Explanation

Brendan identifies cryptocurrency as a fundamental enabler of the ransomware crime type, arguing that this financial innovation has made it much more difficult to track and disrupt ransomware operations. He emphasizes that ransomware as we know it didn’t exist before cryptocurrency.


Evidence

Ransomware is a crime type that didn’t exist before cryptocurrency; cryptocurrency enabled long-range launching of attacks across the globe


Major discussion point

Enabling Factors and Criminal Ecosystem


Topics

Cybersecurity | Economic


Ransomware attacks have consequences that ripple through society and require whole-of-nation responses, not just cybersecurity solutions

Explanation

Brendan argues that ransomware should be viewed as a national security issue rather than just a technical cybersecurity problem. He emphasizes that the externalities of attacks are often borne by communities and governments, not just the targeted businesses, requiring a broader societal response.


Evidence

Medibank attack affected 10 million Australians; women and families facing domestic violence had to be moved to safe houses after health data was compromised; externalities borne by community and government, not just targeted business


Major discussion point

Ransomware as National Security Threat


Topics

Cybersecurity


Agreed with

– Julie Rodríguez Acosta

Agreed on

Ransomware is a national security issue, not just a technical cybersecurity problem


Attacks on small Pacific Island nations like Tonga’s National Health Information Service show the global reach and societal impact of ransomware

Explanation

Brendan provides a compelling example of how ransomware groups target even the most remote and vulnerable populations. He describes how an attack on Tonga’s health system forced hospitals to use paper and pen, with nurses struggling to process patients, demonstrating the real-world human impact of these attacks.


Evidence

Pacific Island countries with populations under 100,000 being targeted by Russian cybercrime groups; Tonga’s National Health Information Service shut down; hospitals using paper and pen; nurses struggling to triage patients; Australia deployed assistance team


Major discussion point

Ransomware as National Security Threat


Topics

Cybersecurity


Agreed with

– Chelsea Smethurst
– Nedalcho Mihay
– Francesca Bosca

Agreed on

Vulnerable sectors and populations are increasingly targeted by ransomware


Australia applies financial sanctions, travel restrictions, and conducts active disruption of ransomware infrastructure

Explanation

Brendan outlines Australia’s multi-faceted approach to combating ransomware, including diplomatic engagement, sanctions, law enforcement cooperation, and active disruption measures. He acknowledges the limitations of these approaches but emphasizes the need for comprehensive responses when dealing with safe havens.


Evidence

Financial and travel sanctions against cybercrime actors; engagement with Russian government on attacks from their territory; fried servers of Medibank attack perpetrators; support for Budapest Cybercrime Convention; Counter Ransomware Initiative participation


Major discussion point

Response Mechanisms and Accountability


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Chelsea Smethurst
– Nedalcho Mihay

Agreed on

Safe havens and jurisdictional challenges enable ransomware operations


Disagreed with

– Francesca Bosca
– Chelsea Smethurst

Disagreed on

Role of civil society in ransomware response


Successful collaboration requires normalizing government engagement and creating safe spaces for information sharing without regulatory consequences

Explanation

Brendan describes Australia’s approach to encouraging private sector cooperation by creating an environment where engaging with government is normalized and expected. He outlines specific measures including legislation protecting shared information from regulatory use and mandatory ransomware payment reporting.


Evidence

Public engagement during major cyber incidents with government teams deployed to company headquarters; legislation protecting information shared with cyber security centre from regulatory use; mandatory ransomware payment reporting scheme; normalization of collective response


Major discussion point

Public-Private Collaboration Models


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Chelsea Smethurst
– Francesca Bosca
– Julie Rodríguez Acosta

Agreed on

Multi-stakeholder collaboration is essential for effective ransomware response



Julie Rodríguez Acosta

Speech speed

121 words per minute

Speech length

1512 words

Speech time

746 seconds

The Costa Rica government attack demonstrated how ransomware can affect states’ ability to deliver essential services and maintain governance

Explanation

Julie describes the Costa Rica attack as a wake-up call that showed how ransomware can target entire government infrastructures, not just individual institutions. She emphasizes that this attack disrupted essential public services, compromised citizens’ personal data, and undermined public trust in the state’s ability to secure digital systems.


Evidence

Costa Rica government infrastructure crippled by ransomware; disruption of essential public services; compromise of citizens’ personal data; undermining of public trust in state’s digital security capabilities


Major discussion point

Ransomware as National Security Threat


Topics

Cybersecurity


Agreed with

– Brendan Dowling

Agreed on

Ransomware is a national security issue, not just a technical cybersecurity problem


The UN framework for responsible state behavior includes norms about preventing malicious actors from operating with impunity

Explanation

Julie explains the existing UN framework that outlines expectations for state behavior in cyberspace, including protection of critical infrastructure. However, she notes a gap between the framework’s principles and the reality on the ground, where attacks continue to target the very infrastructure meant to be protected.


Evidence

UN framework with voluntary non-binding norms and international law applicability; principle that critical infrastructure must be protected and respected; gap between framework and reality with continued attacks on protected infrastructure


Major discussion point

Response Mechanisms and Accountability


Topics

Cybersecurity | Legal and regulatory


Small nations like El Salvador leverage international cooperation through UN, OAS, and bilateral partnerships to combat ransomware

Explanation

Julie provides a Global South perspective on ransomware response, explaining how smaller nations must rely heavily on international cooperation and multilateral frameworks. She describes El Salvador’s multi-stakeholder approach including national laws, regional cooperation, and leveraging international capabilities.


Evidence

El Salvador’s enactment of cybersecurity and data protection laws; cooperation through UN and OAS; bilateral cooperation to learn from more advanced capabilities; recognition that small organizations globally are affected with massive consequences


Major discussion point

Public-Private Collaboration Models


Topics

Cybersecurity | Legal and regulatory | Development


Agreed with

– Brendan Dowling
– Chelsea Smethurst
– Francesca Bosca

Agreed on

Multi-stakeholder collaboration is essential for effective ransomware response


The UN’s future permanent mechanism offers opportunities for states to advance international cooperation on ransomware

Explanation

Julie highlights the transition from the current UN Open-Ended Working Group to a future permanent mechanism as a critical opportunity for all states to share insights on ransomware. She emphasizes that this new mechanism can help design cooperative mechanisms and advance international collaboration regardless of state size or capacity.


Evidence

Transition from current Open-Ended Working Group to permanent mechanism; opportunities for all states regardless of size and capacity to share insights; potential to design mechanisms for advancing international cooperation


Major discussion point

Capacity Building and Future Directions


Topics

Cybersecurity | Legal and regulatory



Francesca Bosca

Speech speed

146 words per minute

Speech length

1606 words

Speech time

657 seconds

Ransomware has evolved from opportunistic attacks to strategically targeting critical infrastructure with high sensitivity to disruption

Explanation

Francesca explains how ransomware groups have shifted their tactics from widespread opportunistic attacks to more strategic targeting of critical infrastructure sectors like healthcare and education. She notes that these sectors have limited cybersecurity resilience but high sensitivity to disruption, making them attractive targets despite potentially lower individual payouts.


Evidence

Shift from opportunistic attacks against many individuals to strategic targeting of critical infrastructure; focus on healthcare, education, and civil society with limited cybersecurity resilience; criminal profits hit record high with over $1 billion paid by victims in 2023 through cryptocurrency


Major discussion point

Evolution and Scale of Ransomware Threats


Topics

Cybersecurity


Agreed with

– Brendan Dowling
– Chelsea Smethurst
– Nedalcho Mihay

Agreed on

Vulnerable sectors and populations are increasingly targeted by ransomware


Initial access broker markets specialize in selling access to compromised networks, facilitating ransomware deployment

Explanation

Francesca describes a specialized criminal ecosystem where initial access brokers obtain and sell access to compromised networks of high-value organizations. This creates a form of cyber-organized crime with specialized professionals providing ransomware operators with the access they need to carry out attacks.


Evidence

Thriving initial access broker markets; brokers specializing in obtaining and selling access to compromised networks of high-value organizations; cyber-organized crime form with specialized professionals


Major discussion point

Enabling Factors and Criminal Ecosystem


Topics

Cybersecurity


Multi-stakeholder efforts including civil society organizations can provide victim-centered responses and ethical frameworks

Explanation

Francesca advocates for including civil society organizations in ransomware response efforts, arguing they bring unique value through documenting societal impacts and proposing ethical frameworks. She emphasizes that civil society can support victim-centered response protocols and reinforce due diligence requirements for digital infrastructure.


Evidence

Ransomware Task Force as successful multi-stakeholder effort; civil society’s unique capacity to document societal impacts; ability to propose ethical frameworks and victim-centered response protocols; support for due diligence in digital infrastructure


Major discussion point

Public-Private Collaboration Models


Topics

Cybersecurity


Agreed with

– Brendan Dowling
– Chelsea Smethurst
– Julie Rodríguez Acosta

Agreed on

Multi-stakeholder collaboration is essential for effective ransomware response


Disagreed with

– Brendan Dowling
– Chelsea Smethurst

Disagreed on

Role of civil society in ransomware response


Inclusive capacity building across different sectors and geographies is essential for meaningful collaboration

Explanation

Francesca emphasizes that collaboration on ransomware must be meaningful rather than tokenistic, requiring genuine capacity building efforts that don’t assume all stakeholders are at the same level. She stresses the need for inclusive approaches that span different sectors and geographic regions.


Evidence

Examples from different areas of the world showing varying capacity levels; need for inclusive capacity building work streams across sectors and geographies


Major discussion point

Capacity Building and Future Directions


Topics

Cybersecurity | Development


V

Vilda

Speech speed

149 words per minute

Speech length

171 words

Speech time

68 seconds

Ransomware is unique among crimes as the only type where the private sector dominates both crime prevention and incident response

Explanation

Vilda argues that ransomware represents a distinctive criminal phenomenon where, unlike other types of crime, the private sector takes the lead role in both preventing attacks and handling their aftermath. This creates an unusual dynamic in the traditional government-private sector relationship for crime response.


Evidence

Academic research showing ransomware as the only crime type where private sector dominates prevention and incident handling


Major discussion point

Public-Private Collaboration Models


Topics

Cybersecurity | Legal and regulatory


Agreements

Agreement points

Ransomware has dramatically increased and represents a global threat requiring urgent action

Speakers

– Giacomo Paoli Persi
– Chelsea Smethurst
– Brendan Dowling

Arguments

Ransomware attacks have grown by nearly 300% in the last year, becoming the most prominent global cybersecurity threat


Microsoft tracks over 600 million cyber attacks daily, with ransomware showing a 275% increase in usage over 12 months


Ransomware-as-a-service model has democratized cybercrime by lowering barriers to entry and allowing non-technical criminals to conduct attacks


Summary

All speakers agree that ransomware has seen unprecedented growth (275-300% increases) and has evolved into the most prominent global cybersecurity threat, requiring immediate and comprehensive responses.


Topics

Cybersecurity


Ransomware is a national security issue, not just a technical cybersecurity problem

Speakers

– Brendan Dowling
– Julie Rodríguez Acosta

Arguments

Ransomware attacks have consequences that ripple through society and require whole-of-nation responses, not just cybersecurity solutions


The Costa Rica government attack demonstrated how ransomware can affect states’ ability to deliver essential services and maintain governance


Summary

Both speakers emphasize that ransomware transcends technical cybersecurity issues and represents a fundamental threat to national security, governance, and essential service delivery.


Topics

Cybersecurity


Safe havens and jurisdictional challenges enable ransomware operations

Speakers

– Chelsea Smethurst
– Brendan Dowling
– Nedalcho Mihay

Arguments

Safe havens where ransomware groups operate with impunity, primarily in Russia, enable continued criminal activity


Australia applies financial sanctions, travel restrictions, and conducts active disruption of ransomware infrastructure


52% of analyzed threat actors remain unattributed, while 54% of attributed actors are linked to Russia


Summary

Speakers agree that safe havens, particularly in Russia, represent a critical enabling factor for ransomware operations, with attribution challenges complicating response efforts.


Topics

Cybersecurity | Legal and regulatory


Multi-stakeholder collaboration is essential for effective ransomware response

Speakers

– Brendan Dowling
– Chelsea Smethurst
– Francesca Bosca
– Julie Rodríguez Acosta

Arguments

Successful collaboration requires normalizing government engagement and creating safe spaces for information sharing without regulatory consequences


Microsoft’s pilot program with Europol integrates private sector expertise with government investigatory powers


Multi-stakeholder efforts including civil society organizations can provide victim-centered responses and ethical frameworks


Small nations like El Salvador leverage international cooperation through UN, OAS, and bilateral partnerships to combat ransomware


Summary

All speakers emphasize the critical need for collaboration across government, private sector, and civil society, with various models being tested and implemented globally.


Topics

Cybersecurity | Legal and regulatory


Vulnerable sectors and populations are increasingly targeted by ransomware

Speakers

– Brendan Dowling
– Chelsea Smethurst
– Nedalcho Mihay
– Francesca Bosca

Arguments

Attacks on small Pacific Island nations like Tonga’s National Health Information Service show the global reach and societal impact of ransomware


Microsoft tracks over 600 million cyber attacks daily, with ransomware showing a 275% increase in usage over 12 months


Analysis of 2,717 ransomware incidents shows over half targeted US organizations, with healthcare being the most affected sector


Ransomware has evolved from opportunistic attacks to strategically targeting critical infrastructure with high sensitivity to disruption


Summary

Speakers agree that ransomware groups are increasingly targeting vulnerable populations and critical infrastructure, particularly healthcare and small nations with limited defensive capabilities.


Topics

Cybersecurity


Similar viewpoints

Both speakers identify cryptocurrency and safe havens as fundamental enabling factors for ransomware operations, with cryptocurrency making financial tracking difficult and safe havens providing operational security for criminals.

Speakers

– Brendan Dowling
– Chelsea Smethurst

Arguments

Cryptocurrency enables long-range ransomware attacks and makes financial tracking more difficult for law enforcement


Safe havens where ransomware groups operate with impunity, primarily in Russia, enable continued criminal activity


Topics

Cybersecurity | Economic


Both speakers emphasize the importance of inclusive, multi-stakeholder approaches that consider the needs of vulnerable populations and smaller nations, advocating for capacity building and international cooperation.

Speakers

– Francesca Bosca
– Julie Rodríguez Acosta

Arguments

Multi-stakeholder efforts including civil society organizations can provide victim-centered responses and ethical frameworks


Small nations like El Salvador leverage international cooperation through UN, OAS, and bilateral partnerships to combat ransomware


Topics

Cybersecurity | Development


Both speakers focus on future-oriented solutions, emphasizing the potential of emerging technologies and the need for comprehensive capacity building to address ransomware challenges.

Speakers

– Chelsea Smethurst
– Francesca Bosca

Arguments

Artificial intelligence tools show promise for evolving countermeasures against ransomware attacks


Inclusive capacity building across different sectors and geographies is essential for meaningful collaboration


Topics

Cybersecurity | Development


Unexpected consensus

The role of civil society in ransomware response

Speakers

– Francesca Bosca
– Brendan Dowling
– Julie Rodríguez Acosta

Arguments

Multi-stakeholder efforts including civil society organizations can provide victim-centered responses and ethical frameworks


Successful collaboration requires normalizing government engagement and creating safe spaces for information sharing without regulatory consequences


Small nations like El Salvador leverage international cooperation through UN, OAS, and bilateral partnerships to combat ransomware


Explanation

Unexpectedly, speakers from government, civil society, and international organizations all agreed on the critical role of civil society in ransomware response, which is unusual given that cybersecurity is often viewed as primarily a government-private sector issue.


Topics

Cybersecurity | Legal and regulatory


The need for active disruption measures beyond traditional law enforcement

Speakers

– Brendan Dowling
– Chelsea Smethurst

Arguments

Australia applies financial sanctions, travel restrictions, and conducts active disruption of ransomware infrastructure


Microsoft’s pilot program with Europol integrates private sector expertise with government investigatory powers


Explanation

There was unexpected consensus between government and private sector representatives on the need for active disruption measures, including ‘frying servers’ and novel integration models, which represents a more aggressive approach than traditional cybersecurity responses.


Topics

Cybersecurity | Legal and regulatory


Overall assessment

Summary

The speakers demonstrated remarkably high consensus across all major aspects of ransomware challenges and responses. Key areas of agreement included the dramatic scale of the threat, its evolution from technical to national security issue, the critical role of safe havens and cryptocurrency as enablers, the need for multi-stakeholder collaboration, and the targeting of vulnerable populations and critical infrastructure.


Consensus level

Very high consensus with no significant disagreements identified. This strong alignment suggests a mature understanding of the ransomware threat landscape and broad agreement on response strategies. The implications are positive for policy development and international cooperation, as stakeholders from government, private sector, civil society, and international organizations share common threat assessments and response frameworks. This consensus provides a solid foundation for coordinated action and suggests that the main challenge is implementation rather than agreement on the nature of the problem or general response approaches.


Differences

Different viewpoints

Role of civil society in ransomware response

Speakers

– Francesca Bosca
– Brendan Dowling
– Chelsea Smethurst

Arguments

Multi-stakeholder efforts including civil society organizations can provide victim-centered responses and ethical frameworks


Australia applies financial sanctions, travel restrictions, and conducts active disruption of ransomware infrastructure


Microsoft’s pilot program with Europol integrates private sector expertise with government investigatory powers


Summary

Francesca advocates for including civil society as a third pillar alongside government and private sector, emphasizing victim-centered approaches and ethical frameworks. However, other speakers focus primarily on government-private sector partnerships without explicitly including civil society organizations in their collaboration models.


Topics

Cybersecurity | Legal and regulatory


Unexpected differences

Emphasis on capacity building vs. enforcement

Speakers

– Francesca Bosca
– Brendan Dowling

Arguments

Inclusive capacity building across different sectors and geographies is essential for meaningful collaboration


Australia applies financial sanctions, travel restrictions, and conducts active disruption of ransomware infrastructure


Explanation

While both speakers acknowledge the global nature of the ransomware threat, Francesca emphasizes the need for inclusive capacity building and not assuming all stakeholders are at the same level, while Brendan focuses more on enforcement measures and active disruption. This represents an unexpected philosophical difference between capacity-building versus enforcement-first approaches.


Topics

Cybersecurity | Development


Overall assessment

Summary

The speakers showed remarkable consensus on the nature and scale of the ransomware threat, with disagreements primarily focused on implementation approaches rather than fundamental issues. Main areas of difference included the role of civil society in response efforts, preferred models for public-private collaboration, and emphasis between capacity building versus enforcement measures.


Disagreement level

Low to moderate disagreement level. The speakers demonstrated strong alignment on threat assessment and the need for collaborative responses, with differences mainly in tactical approaches and stakeholder inclusion. This suggests a mature policy discussion where the fundamental challenges are well understood, but implementation strategies are still evolving. The implications are positive for policy development, as the shared understanding of core issues provides a solid foundation for developing comprehensive responses that could incorporate multiple approaches.


Partial agreements



Takeaways

Key takeaways

Ransomware has evolved from a cybersecurity issue to a national security threat requiring whole-of-society responses, with attacks growing 275-300% in the past year


The ransomware-as-a-service model has democratized cybercrime by lowering barriers to entry and enabling non-technical criminals to conduct sophisticated attacks


Cryptocurrency and safe haven jurisdictions (primarily Russia) are key enablers that make ransomware profitable and difficult to prosecute


Critical infrastructure and vulnerable populations (healthcare, small nations, NGOs) are increasingly targeted due to their high sensitivity to disruption and limited cybersecurity resources


Successful counter-ransomware efforts require coordinated multi-stakeholder collaboration between governments, private sector, and civil society organizations


Attribution remains challenging with 52% of threat actors unattributed, though 54% of attributed actors are linked to Russia


Public-private partnerships must normalize government engagement and create safe information-sharing environments without regulatory consequences


International cooperation through mechanisms like the UN framework, Budapest Convention, and Counter Ransomware Initiative is essential but implementation remains insufficient


Resolutions and action items

Australia’s mandatory ransomware payment reporting scheme to improve collective response and information sharing


Microsoft’s pilot program with Europol to integrate private sector expertise with government investigatory powers


Cyber Peace Institute’s two-phase project to map ransomware threat actors globally and evaluate state compliance with UN cyber norms


El Salvador’s advocacy for stronger UN language addressing ransomware and support for establishing permanent mechanisms for international cooperation


Australia’s deployment of assistance teams to help Tonga recover from ransomware attacks on their National Health Information Service


Unresolved issues

How to effectively address safe haven jurisdictions where ransomware groups operate with impunity, particularly Russia’s non-cooperation


Developing scalable models for public-private collaboration that can be replicated across different countries and sectors


Creating inclusive capacity building programs across different geographies and sectors to address varying levels of cybersecurity readiness


Establishing effective mechanisms for cryptocurrency tracking and regulation to disrupt ransomware financial flows


Determining the role of blockchain technology in mitigating cyber threats, with limited current research available


Addressing the challenge that over 90% of successful ransomware attacks target unmanaged devices in under-resourced organizations


Developing victim-centered response protocols and ethical frameworks for ransomware incidents


Suggested compromises

Creating safe spaces for private sector engagement with government where shared information will not be used for regulatory purposes


Balancing mandatory reporting requirements with incentives for voluntary cooperation and information sharing


Leveraging international organizations and bilateral partnerships to help smaller nations access cybersecurity capabilities they cannot develop independently


Integrating civil society organizations into public-private partnerships to provide victim-centered perspectives and ethical frameworks


Using the UN’s future permanent mechanism to focus on practical implementation of existing norms rather than creating new commitments


Thought provoking comments

What we’re seeing at the moment in Pacific Islands, some countries with populations fewer than 100,000 people are being targeted by cybercrime groups operating out of Russia. Last week, the National Health Information Service in Tonga was shut down by a ransomware attack… At the moment, in hospitals in Tonga, people are using paper and pen to deliver healthcare to their people.

Speaker

Brendan Dowling


Reason

This comment powerfully reframes ransomware from a technical cybersecurity issue to a humanitarian crisis affecting the most vulnerable populations. It demonstrates the global reach and indiscriminate nature of ransomware attacks, challenging assumptions about who gets targeted.


Impact

This vivid example set the tone for the entire discussion, establishing ransomware as a human-centered issue rather than just a technical problem. It influenced subsequent speakers to adopt similar human-impact framing and contributed to the panel’s emphasis on ransomware as a national security threat.


For anyone who thinks ransomware is a technical issue, out of that incident, we saw women and families facing domestic violence from partners who weren’t aware of the health treatment that their spouse or their mother or their sister had been seeking, and had to be moved to safe houses to escape violent partners or former partners. These are not cyber issues. These are not technical issues. These are whole of nation security and safety issues.

Speaker

Brendan Dowling


Reason

This comment fundamentally challenges how ransomware is categorized and understood, revealing unexpected cascading social consequences that extend far beyond the immediate cyber incident. It demonstrates how data breaches can trigger real-world violence and endanger lives.


Impact

This observation became a central theme throughout the discussion, with multiple speakers subsequently emphasizing the societal and national security dimensions of ransomware. It helped shift the conversation from technical solutions to whole-of-society responses.


This crime type didn’t exist before cryptocurrency. Cryptocurrency enabled the long-range launching of ransomware attacks across the globe.

Speaker

Brendan Dowling


Reason

This insight identifies cryptocurrency as the fundamental enabler that transformed ransomware from a localized nuisance into a global threat. It provides a clear causal link between financial innovation and criminal evolution.


Impact

This observation was picked up by multiple subsequent speakers who elaborated on cryptocurrency’s role in ransomware operations. It helped frame the discussion around the intersection of financial technology and cybercrime, influencing later conversations about blockchain and financial tracking.


We also see cases where ransomware has been used, not primarily for financial gain, but as a vector to conduct denial-of-service attacks that affect the availability of systems and national space… This attack disrupted essential public services and compromised the confidentiality of citizens’ personal data… it undermines public trust in the state’s ability to secure a digital system.

Speaker

Julie Rodríguez Acosta


Reason

This comment introduces a crucial distinction between financially-motivated ransomware and state-linked operations with broader strategic objectives. It highlights how ransomware can be weaponized to undermine governmental legitimacy and public trust.


Impact

This insight elevated the discussion to the level of international relations and state security, influencing the moderator’s subsequent questions about state accountability and the role of multilateral institutions in addressing ransomware threats.


So it’s, as far as I can tell, the only type of crime where the private sector is dominating both on crime prevention but also handling the incident and dealing with the aftermath

Speaker

Vilda (audience member)


Reason

This observation from a criminologist provides a unique analytical framework that distinguishes ransomware from all other crime types. It highlights an unprecedented shift in crime response dynamics where traditional government roles have been largely assumed by private entities.


Impact

This comment prompted detailed responses from government representatives about public-private cooperation models and sparked discussion about the need to normalize government engagement in ransomware incidents. It helped frame the final portion of the discussion around collaborative response models.


We have this service industry where you can talk to a liaison person or a broker who will connect you with the person who will conduct the initial attack on a system… So it is now an accessible crime type. And for most ransomware groups, they will just take 20% of the profit from the attack that you conduct. So it’s become democratised, industrialised, and it is ubiquitous.

Speaker

Brendan Dowling


Reason

This comment reveals the sophisticated business model behind modern ransomware operations, showing how it has evolved from individual hacking to an organized criminal industry with specialized roles and profit-sharing structures.


Impact

This insight was reinforced by other speakers who discussed ‘ransomware as a service’ and influenced the discussion about why ransomware attacks have increased so dramatically. It helped explain the scalability and accessibility that drives the current ransomware epidemic.


Overall assessment

These key comments fundamentally transformed what could have been a technical cybersecurity discussion into a comprehensive examination of ransomware as a multifaceted global crisis. Dowling’s vivid examples of human impact in Tonga and Australia established an emotional and humanitarian foundation that influenced all subsequent speakers to frame their contributions in terms of real-world consequences rather than abstract technical challenges. The identification of cryptocurrency as the foundational enabler provided a clear analytical framework that other speakers built upon. The distinction between criminal and state-linked ransomware operations elevated the discussion to matters of international security and diplomacy. Finally, the criminologist’s observation about the unique public-private dynamics in ransomware response opened up crucial questions about governance and collaboration models. Together, these insights created a rich, multi-dimensional conversation that addressed technical, social, economic, political, and humanitarian aspects of the ransomware threat, demonstrating how individual thought-provoking observations can elevate and redirect an entire policy discussion.


Follow-up questions

How is it possible that despite the fact that we all know what ransomware is, it’s still having such a devastating impact on cyber security?

Speaker

Giacomo Paoli Persi


Explanation

This fundamental question about the persistence of ransomware despite awareness was posed at the beginning but not fully answered, requiring deeper investigation into the gap between knowledge and effective countermeasures


Is there any research on how blockchain deployment is correlated to the mitigation of cyber threats? If no, how do we promote this research topic? And if yes, what is the outcome?

Speaker

Online participant (via Michael Karamean)


Explanation

This question about blockchain’s potential role in cybersecurity mitigation was only partially addressed, with speakers acknowledging their knowledge was outdated and suggesting need for current research


How to use blockchain for ransomware resistance and incident attribution

Speaker

Francesca Bosca


Explanation

Francesca expressed specific interest in exploring blockchain applications for ransomware defense and attribution, indicating this as a promising research direction


How you can integrate blockchain with AI for automated threat detection

Speaker

Francesca Bosca


Explanation

This represents an emerging area combining two technologies that could enhance cybersecurity capabilities but requires further investigation


Research into exploitable and exploited infrastructure used for multiple criminal activities beyond ransomware

Speaker

Francesca Bosca


Explanation

Phase two of the Cyber Peace Institute’s research will investigate how the same infrastructure is used for various crimes, which is currently under-researched


What are the opportunities (weaknesses of the system) that criminals exploit, beyond just the means they use

Speaker

Giacomo Paoli Persi


Explanation

The moderator noted there’s insufficient focus on the ‘opportunities’ aspect of criminal behavior in ransomware, suggesting need for more research on systemic vulnerabilities


How creative uses of artificial intelligence tools will evolve to counter ransomware in the coming years

Speaker

Chelsea Smethurst


Explanation

Chelsea expressed interest in seeing how AI countermeasures against ransomware will develop, indicating this as an important area for ongoing research and development


Development of model laws or legislation for countries that need regulatory frameworks to intervene against ransomware

Speaker

Giacomo Paoli Persi


Explanation

The moderator suggested this as a potential area for UN work, noting that some states may lack legal frameworks to take action against ransomware operations in their territory


How to build inclusive capacity building work streams across different sectors and geographies

Speaker

Francesca Bosca


Explanation

Francesca emphasized the need for comprehensive capacity building research and implementation, noting that not all stakeholders are at the same level of understanding or capability


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #45 Advancing Cyber Resilience of Critical Infrastructure

Session at a glance

Summary

This open forum discussion focused on advancing cyber resilience of critical infrastructure in an increasingly connected world where malicious actors frequently target essential services. The panel brought together diplomatic and technical experts to explore how different communities can collaborate more effectively to strengthen cybersecurity defenses.


Pavel Mraz from UNIDIR outlined the alarming threat landscape, noting that nearly 40% of state-sponsored cyber operations in 2024 targeted critical infrastructure, with ransomware attacks surging by 275% and global cybercrime losses exceeding $10 trillion. Timea Suto from the private sector emphasized that companies face diverse threat actors including state-sponsored groups, criminal organizations, and insider threats, but stressed that even well-funded private entities cannot combat these challenges alone without government support and public-private partnerships.


Floreta Faber shared Albania’s experience with major cyber attacks in 2022, highlighting key lessons about cybersecurity being a mindset issue requiring involvement from all organizational levels, not just technical teams. She described significant reforms including expanding their cybersecurity authority from 20 to 85 people and increasing critical infrastructure designations by 50%. Caroline Troein from ITU discussed capacity building efforts, particularly the importance of national CERTs and cyber exercises that simulate real-world attacks to foster cross-sectoral coordination and build trust between stakeholders.


Lars Erik Smevold provided the energy sector perspective, emphasizing that resilience requires understanding cyber-physical systems and conducting regular drills with operational staff. He stressed the importance of cross-border cooperation, particularly in interconnected electricity grids. The discussion concluded with calls for bridging diplomatic and technical communities through practical cooperation frameworks, shared exercises, and inclusive capacity building that translates international norms into real-world protection measures.


Keypoints

## Major Discussion Points:


– **Current Cyber Threat Landscape for Critical Infrastructure**: The discussion revealed alarming statistics, with nearly 40% of state-sponsored cyber operations in 2024 targeting critical infrastructure sectors like energy, healthcare, finance, water, and telecommunications. Ransomware attacks surged by 275%, and global cybercrime losses exceeded $10 trillion, making cybercrime equivalent to the world’s third-largest economy if measured by GDP.


– **Multi-Stakeholder Collaboration and Breaking Down Silos**: A central theme emphasized the critical need to bridge gaps between diplomatic and technical communities, strengthen public-private partnerships, and foster cross-sectoral cooperation. Panelists stressed that no single actor—whether government, private sector, or international organization—can secure critical infrastructure alone.


– **National Experiences and Lessons Learned**: Albania’s experience with major cyber attacks in 2022 provided concrete insights into building resilience, including the importance of expanding from purely technical approaches to comprehensive capacity building, increasing staff from 20 to 85 people, and implementing new legal frameworks based on the NIS2 directive.


– **Capacity Building and Practical Implementation**: The discussion highlighted the vital role of cyber drills, tabletop exercises, and training programs in building national resilience. These practical tools help translate international frameworks and norms into real-world protection while fostering trust and coordination between different stakeholders who may not have previously interacted.


– **Policy Frameworks and International Cooperation**: Panelists explored how UN frameworks for responsible state behavior in cyberspace can be operationalized through practical measures like point-of-contact directories, crisis communication protocols, and regional cooperation models, while emphasizing the need for “smarter policy, not more regulation.”


## Overall Purpose:


The discussion aimed to explore how to advance cyber resilience of critical infrastructure through enhanced cooperation between diplomatic and technical communities, sharing of best practices and lessons learned, and development of practical frameworks for protecting essential services that underpin modern society.


## Overall Tone:


The discussion maintained a professional yet urgent tone throughout, beginning with sobering statistics about the threat landscape but evolving into a more constructive and solution-oriented conversation. While acknowledging the serious challenges and complexities involved, panelists remained optimistic about progress being made and emphasized practical, collaborative approaches. The tone was notably inclusive and emphasized mutual learning, with speakers from different sectors and regions sharing experiences openly and building on each other’s insights.


Speakers

– **Marie Humeau**: Moderator of the session


– **Floreta Faber**: Deputy Director General Envoy for Cyber Diplomacy, Director of International Project Coordinator and Strategic Development of Cybersecurity at the National Cyber Security Authority of Albania


– **Lars Erik Smevold**: Security and Processor Control Architect, R&D IT and ICS at Stratgraft (energy sector)


– **Pavel Mraz**: Cybersecurity researcher at UNIDIR (UN Institute for Disarmament Research)


– **Caroline Troein**: Cybersecurity Division at the ITU (International Telecommunication Union)


– **Ms. Timea Suto**: Global Director Policy Lead (private sector perspective on critical infrastructure protection)


– **Mr. Akhil Thomas**: Strategy and Operation Manager at the Global Forum for Cyber Expertise (session summarizer)


– **Participant**: Works for an IT company owned by the Church of Norway (identified as Eirik)


**Additional speakers:**


– **Gautam Kaila**: Chief Executive Officer of the Global Cyber Forum (mentioned in introduction but did not speak during the recorded portion)


Full session report

# Advancing Cyber Resilience of Critical Infrastructure: A Multi-Stakeholder Forum Discussion


## Executive Summary


This comprehensive open forum discussion brought together diplomatic and technical experts to address the urgent challenge of strengthening cyber resilience for critical infrastructure in an increasingly interconnected world. The session, moderated by Marie Humeau, featured perspectives from international organisations, national governments, the private sector, and technical specialists, all united by the recognition that malicious actors are increasingly targeting essential services that underpin modern society.


The discussion revealed a concerning threat landscape whilst highlighting promising avenues for enhanced cooperation. Through detailed presentations and interactive dialogue, participants explored how different communities can collaborate more effectively to strengthen cybersecurity defences, moving beyond traditional silos to create comprehensive protection frameworks for critical infrastructure.


## Current Threat Landscape and Scale of Challenge


Pavel Mraz from UNIDIR opened the discussion by presenting statistics that established the gravity of the current cybersecurity environment. He reported that nearly 40% of state-sponsored cyber operations in 2024 specifically targeted critical infrastructure sectors, including energy, healthcare, finance, water, and telecommunications. This targeting represents a significant shift in the threat landscape, with essential services becoming primary objectives rather than collateral targets.


Mraz highlighted the economic scale of cybercrime, reporting that global losses from cybercrime exceeded US$10 trillion in 2024 and that ransomware attacks surged by 275%. He emphasised the evolution of attack methodologies, particularly the rise of supply chain attacks that leverage a “target one, compromise many” principle, allowing threat actors to reach multiple downstream customers through a single successful breach.


The UNIDIR researcher also introduced the UN framework for responsible state behaviour in cyberspace, particularly norm F, which prohibits attacks on critical infrastructure. He stressed the importance of translating these frameworks into practical measures through national legislation and institutional coordination.


## Private Sector Challenges and Investment Needs


Timea Suto, representing the private sector perspective, outlined the diverse threat landscape confronting private entities, which includes state-nexus actors, organised cybercriminal ecosystems, and insider threats from employees or contractors. She detailed significant investments being made by private sector organisations, including implementation of zero-trust architectures, vulnerability management programmes, supply chain security assessments, and incident response plans.


However, Suto emphasised a critical limitation: “Even well-funded private entities cannot deter state-sponsored actors or dismantle global criminal networks alone.” This led to her call for a fundamental shift in policy approaches, advocating for “smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors.”


Suto stressed the importance of inclusive policymaking processes that give all stakeholders a meaningful voice in developing critical infrastructure protection frameworks. She argued that governments should take a more active role in disrupting threat actors whilst allowing private companies to focus on operational security and innovation.


## National Experience: Albania’s Response to Cyber Attacks


Floreta Faber, Albania’s Deputy Director General Envoy for Cyber Diplomacy, shared Albania’s experience with major cyber attacks in 2022 that targeted the country’s e-government services. Her most significant insight was reframing cybersecurity from a purely technical challenge to a comprehensive organisational issue.


“We understood that talking about cyber security it’s not talking about technology, it’s talking about a mindset, it’s talking involving more people from the top management to the simple employee inside every organisation that cyber security is something everyone needs to focus on,” Faber explained.


Following the attacks, Albania dramatically expanded its cybersecurity authority from 20 to 85 people and expanded its critical infrastructure designations by 50%. The country also implemented new legislation based on the NIS2 directive and established regular cyber drills to build understanding between stakeholders.


Faber described Albania’s long-term approach to building regional cooperation through youth engagement, establishing cyber camps for young people in the region. “We believe those are things which take time. And sometimes they prevent you not talking to each other for different trust reasons, which are not only cyber security,” she noted, acknowledging that technical cooperation cannot be separated from broader geopolitical contexts.


## International Capacity Building Efforts


Caroline Troein from the International Telecommunication Union provided insights into global capacity building efforts, noting that “many of the issues that developing countries are facing are ones that developed countries are facing. Are you being agile? Do you have the right people in the right places? Are the stakeholders actually coordinating?”


Troein reported that ITU receives requests from multiple countries for cybersecurity support, including CERT establishment, strategy development, and specialised training programmes. She emphasised the critical role of national Computer Emergency Response Teams (CERTs) as the first line of defence, noting they require legal mandates, operational structures, sustainable funding, and continuous training to be effective.


The ITU representative highlighted the importance of cyber exercises that simulate real-world attacks and test response mechanisms. She noted that whilst countries now have more cybersecurity measures than ever before, challenges persist in coordination and implementation, suggesting that the bottleneck is not necessarily in individual components but in how these elements work together as integrated systems.


## Energy Sector Operational Realities


Lars Erik Smevold, representing the energy sector as a Security and Processor Control Architect, provided insights into the operational realities of protecting critical infrastructure. He defined resilience as the ability to “anticipate, prepare for, respond to, recover from, and learn from disruptions.”


Smevold emphasised the unique challenges of cyber-physical systems, where cybersecurity measures implemented on one system can affect other interconnected systems. This is particularly relevant in the energy sector, where cross-border electricity grid connections require coordinated responses between Nordic and European transmission system operators.


He stressed the importance of involving operational staff in cybersecurity preparations and noted that technical specialists need better understanding of different critical infrastructure sectors. Smevold also contributed to discussions on bridging technical and diplomatic communities, suggesting that these communities need informal arenas to meet and build understanding of each other’s work and resource needs.


## Building Trust and Communication Networks


A recurring theme throughout the discussion was the critical importance of pre-established relationships and communication channels. Mraz introduced a compelling metaphor: “You cannot exchange business cards in a hurricane when a real cyber crisis hits, and you need assistance from abroad… You need to have all these channels, the trust, and the network already in place to know where to reach out.”


Faber described practical approaches to building regional cooperation through informal communication channels, including regular information sharing platforms. She emphasised that trust-building requires long-term investment and can be developed through professional networks that persist beyond specific projects or initiatives.


The discussion revealed that effective cooperation requires both formal structures and informal mechanisms. Whilst official frameworks and protocols are necessary, the human relationships and mutual understanding that enable effective cooperation often develop through informal interactions and shared experiences.


## Information Sharing Challenges


A significant challenge addressed was sharing sensitive cybersecurity information across borders. A participant from an IT company asked: “How can we make arrangements for sharing sensitive technical data across borders without making it public, while still allowing technical people to defend their systems better?”


This question highlighted a fundamental tension in cybersecurity cooperation: the need to share threat intelligence to enable collective defence whilst maintaining operational security. Faber responded by describing Albania’s approach to building regional cooperation through informal communication channels, representing practical mechanisms that technical professionals can use to build relationships and share information.


The discussion emphasised that information sharing requires sustained engagement and trust-building through professional networks and alumni connections that persist over time.
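
One practical convention the technical community already uses for sharing sensitive indicators across borders without making them public is FIRST’s Traffic Light Protocol (TLP). The sketch below is illustrative only: the label names follow TLP version 2.0, but the filtering rule and the `shareable` helper are assumptions about how a CERT might gate what it forwards to a bilateral partner, not part of the standard itself.

```python
from enum import IntEnum

class TLP(IntEnum):
    """FIRST TLP 2.0 labels, ordered from most to least restrictive."""
    RED = 0     # named recipients only; never forwarded
    AMBER = 1   # recipient organisation (and clients) on a need-to-know basis
    GREEN = 2   # wider defender community, but not public channels
    CLEAR = 3   # no disclosure restriction

def shareable(indicators, partner_clearance: TLP):
    """Filter indicators a partner CERT may receive.

    `partner_clearance` is the most restrictive label the bilateral
    agreement covers (an assumed convention for this sketch): anything
    at that level or less restrictive may be forwarded.
    """
    return [(ioc, label) for ioc, label in indicators
            if label >= partner_clearance]
```

For example, a CERT cleared for TLP:AMBER sharing would receive the AMBER and GREEN indicators from a feed, while TLP:RED items stay with the originally named recipients.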


## Bridging Technical and Diplomatic Communities


Multiple speakers recognised that effective cybersecurity requires both technical expertise and diplomatic coordination, yet these communities often operate separately. Faber described Albania’s approach of bringing experienced diplomats into technical organisations, creating important translation capabilities between communities.


Smevold reinforced this theme by suggesting informal meeting opportunities and cross-visits between technical facilities and diplomatic offices. The discussion revealed that bridging these communities requires both formal structures and informal mechanisms, with understanding built through direct exposure to each other’s working environments and challenges.


## Areas of Consensus and Practical Recommendations


Despite the complexity of the challenges discussed, participants demonstrated strong consensus on several key points. There was universal agreement that multi-stakeholder collaboration is essential, with no single actor capable of addressing cyber threats alone. All participants agreed on the importance of capacity building and training that goes beyond technical skills to include awareness at all organisational levels.


The discussion generated several concrete recommendations:


– Countries should designate points of contact for crisis communication and establish pre-crisis trust networks


– Technical and diplomatic communities need more informal meeting opportunities to build mutual understanding


– Development of secure channels for sharing sensitive threat information across borders between technical professionals


– Strengthening regional cooperation through platforms like CERT-to-CERT information sharing


– Investment in long-term trust-building initiatives, including youth engagement programmes


– Translation of UN cyber norms into practical national frameworks with clear legal mandates and operational structures
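
The first recommendation, pre-designated points of contact with pre-agreed channels, can be made concrete with a small data model. The sketch below is a hypothetical illustration (the field names and the 180-day re-verification window are assumptions, not drawn from any UN directory specification): it shows the two operations such a directory must support, fast lookup during a crisis and routine detection of stale entries that need re-confirmation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class PointOfContact:
    country: str        # country the contact represents
    organisation: str   # e.g. the national CERT or cyber authority
    channel: str        # pre-agreed crisis channel (duty phone, secure email, ...)
    last_verified: date # directories go stale; contacts must be re-verified

class POCDirectory:
    """Minimal in-memory directory of national crisis contacts."""

    def __init__(self, max_age_days: int = 180):
        self._by_country: dict[str, PointOfContact] = {}
        self._max_age = timedelta(days=max_age_days)

    def register(self, poc: PointOfContact) -> None:
        # Case-insensitive keying so crisis-time lookups are forgiving.
        self._by_country[poc.country.lower()] = poc

    def lookup(self, country: str) -> PointOfContact:
        return self._by_country[country.lower()]

    def stale(self, today: date) -> list[PointOfContact]:
        """Contacts whose verification has lapsed and should be re-confirmed."""
        return [p for p in self._by_country.values()
                if today - p.last_verified > self._max_age]
```

The design point is the one Mraz made with his hurricane metaphor: the directory only helps if it is populated and re-verified before the crisis, which is why staleness detection is a first-class operation rather than an afterthought.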


## Ongoing Challenges


Several significant challenges remain unresolved. The question of how to effectively share sensitive technical threat information across borders whilst maintaining security represents a fundamental operational challenge. Balancing regulatory requirements with operational flexibility for private sector critical infrastructure operators remains an area where different stakeholders advocate for different approaches based on their experiences.


The fragmentation of critical infrastructure definitions and frameworks across different countries creates coordination challenges that may require improved mapping and translation between different national approaches. Additionally, scaling cybersecurity capacity building to meet global needs represents a resource challenge that may require innovative approaches to knowledge transfer and peer-to-peer learning.


## Conclusion


This comprehensive discussion demonstrated both the complexity of protecting critical infrastructure in the digital age and the potential for enhanced cooperation across traditional boundaries. The participants’ emphasis on cybersecurity as fundamentally a human and organisational challenge, rather than merely a technical one, represents a mature understanding that has significant implications for policy and practice.


The discussion’s focus on practical cooperation mechanisms—from informal communication channels to structured exercises and cross-community engagement—offers concrete pathways for translating high-level commitments into operational improvements. The emphasis on trust-building as a long-term strategic investment provides a foundation for sustainable cybersecurity cooperation.


Whilst significant challenges remain, particularly around information sharing mechanisms and regulatory approaches, the level of consensus achieved on fundamental principles provides a strong foundation for continued progress. The participants’ recognition that no single actor can secure critical infrastructure alone, combined with their practical suggestions for enhanced cooperation, offers pathways for more resilient and collaborative approaches to protecting the essential services upon which modern society depends.


Session transcript

Marie Humeau: Thank you and welcome to our open forum. We want to discuss with you how to advance cyber resilience of critical infrastructure. In an ever more connected world, not only are people more connected, but so is the critical infrastructure we rely on. The resilience of critical infrastructure, which is increasingly a target of malicious actors, is key. Robust cyber resilience measures are therefore vital. In an environment where incidents could have spillover effects on international peace and security, there are risks of escalation. We need to look at overcoming the silos between diplomatic and technical communities, strengthening national and cross-border CERT-to-CERT cooperation, and fostering multi-stakeholder engagement. The idea for this discussion came from the observation that different communities have an important role to play, but that they need to be offered more opportunities to share expertise and knowledge, to get better informed, and to gain a greater understanding of what each community is doing and how we can support one another in our work to build a resilient cyberspace. We will explore all of this with our distinguished panel today. My name is Marie Humeau and I will be your moderator today. I’m happy to introduce you to our cross-community panel. On my right is Floreta Faber. She is Deputy Director General Envoy for Cyber Diplomacy, Director of International Project Coordinator and Strategic Development of Cybersecurity at the National Cyber Security Authority of Albania. On my left is Lars Erik Smevold, Security and Processor Control Architect, R&D IT and ICS at Stratgraft. Online I also have three panelists: Mr. Pavel Mraz, who works as a cybersecurity researcher at UNIDIR; Caroline Troein, Cybersecurity Division at the ITU; and Timea Suto, Global Director Policy Lead; as well as Mr. Gautam Kaila, the Chief Executive Officer of the Global Cyber Forum. 
To facilitate my work and our reporting, I will also ask Akhil Thomas, Strategy and Operation Manager at the Global Forum for Cyber Expertise, to summarize the discussion in a few words at the end of the session. We also count on your active participation, so please prepare some questions for the Q&A session. Because it’s a very rich issue, I’m going to stop talking now and put the questions to my panelists. I will start by looking at the threat landscape and at national experience in how to build efficient critical infrastructure protection. For this, I will start by asking Pavel Mraz online the first question: what does today’s global cyber threat landscape look like for critical infrastructure, and where are the biggest vulnerabilities emerging?


Pavel Mraz: Marie, thank you for the floor, good morning to Oslo and to everyone, and good day to those connecting online. To your question: the UN Institute for Disarmament Research will have a research report coming out summarizing the main threats of 2024 in cyberspace, and let me give you a few highlights, specifically focusing on critical infrastructure. When it comes to critical infrastructure, the cyber threat landscape in 2024 has grown increasingly complex. It became clear that critical infrastructure remains both an attractive target for financially motivated actors and a strategic target for some state-affiliated actors. In 2024, alarmingly, nearly 40% of all documented cyber operations by states focused on critical infrastructure, targeting sectors such as energy, healthcare, finance, water, and telecommunications. These sectors are, of course, foundational. As a result, last year we saw a surge in ransomware attacks of 275%, and global financial losses from cybercrime disruptions exceeded US$10 trillion. To put it in other words, if cybercrime were a country measured by GDP, it would be the world’s third-largest economy. We also see attacks on digital supply chains becoming more prominent: leveraging the principle of “target one, compromise many,” malicious cyber actors now increasingly use supply chain attacks to target downstream customers, including critical infrastructure operators. Importantly, internet infrastructure, which includes satellites, undersea cables, and data centers, is also increasingly vulnerable and targeted by cyber attacks. These types of threats raise concerns about widespread interruption of critical digital services, particularly in times of heightened geopolitical tensions. Even the UN system itself and humanitarian operations are not exempt from cyber attacks.
According to the UN’s latest reporting, over 50% of cyber threats targeting the UN in 2024 came from advanced persistent threat actors, which include states, and these attacks have disrupted critical aid operations and endangered vulnerable populations. Taken together, these trends show that cyber attacks are becoming a question of when for many organizations, not a question of if, and no sector or state can contain cyber risks alone. As infrastructure becomes more digital and interconnected, securing it will require multi-level, multi-stakeholder cooperation, but also resilience planning and preparing for when cyber attacks hit. Positively, UN member states have acknowledged these risks, with states calling for greater protection of critical infrastructure, particularly infrastructure that delivers essential services across borders, and for reinforcing an international taboo against targeting these types of systems. A number of states have also indicated that they will be protecting these types of systems. That will require strong cross-sectoral and cross-border cooperation, and also practical tools, including adopting national frameworks, using cyber drills, and stepping up capacity building to translate shared global principles into real-world protection on the ground. I will talk about these in more detail later on, but I will leave it at that for now and hand back over to you, Marie.


Marie Humeau: Thank you very much, Pavel, and thank you for this very clear scene setter. Now that we have looked at the threats and the scarier side of things, we will also look at resiliency and how to really strengthen our cyberspace. But Timea first. Maybe on your side, Timea, from a private sector lens: who are the main threat actors targeting critical infrastructure, and how is the industry adapting? And also, what is needed to strengthen the resilience of the private sector? So, Timea, over to you.


Ms. Timea Suto: Thanks very much, Marie, and I’d just like to preface that everything I say here today is written in much more detail in a report that ICC published at the IGF last year on the protection of critical infrastructure and their supply chains, and that’s available in English, Spanish, and Chinese, as well as Arabic. So, if you want to hear more about what I try to cram into my short interventions, please take a look at the report, and I’ll put the link in the chat later on. To answer your question, Marie, from a private sector perspective, the threat landscape facing critical infrastructure has never been more serious or diverse. We are seeing a broad range of actors, each with their distinct motivations and capabilities, that target essential services that underpin our economies and societies. On one end of the spectrum, we have the state-nexus threat actors, often referred to as advanced persistent threats, or APTs. These actors are often supported by governments, military, or intelligence institutions, and they are typically well-funded, highly skilled, and capable of executing long-term complex operations. Their objectives vary from disrupting services and accessing sensitive information to advancing geopolitical interests or undermining public trust in institutions, and they can target both public and private sector entities. At the same time, the private sector must contend with increasingly organized cyber-criminal ecosystems. These criminal groups are often globally distributed and structured in ways that make them resilient to takedowns and prosecution, while ransomware as a service has made it possible for even relatively unsophisticated attackers to cause major disruptions. Thirdly, there are insider threats that are also a significant concern. 
These are individuals, whether malicious, ambitious, or simply negligent, who could be employees or third-party contractors to critical infrastructure services, and who often have privileged access and fewer security checks. Even a small mistake on their part, or intentional sabotage, can have big cascading real-world consequences. What makes all of these threats more dangerous is the interconnected nature of our infrastructure systems, right. A compromise in one sector, say electricity, can ripple into others like healthcare, telecommunications or transportation. And these aren’t just IT risks, these are national and global security concerns. Cyberattacks on critical infrastructure can lead to service outages, physical destruction or even endanger lives. And it’s not just about keeping these systems online, it’s about making sure that these attacks don’t compromise the confidentiality and integrity of data, which can lead to long-lasting consequences like identity theft or misinformation that cause havoc long after the incident has been dealt with, right. So how is the private sector responding to this? It is actually stepping up, making significant investments in cybersecurity resilience. We are seeing growing adoption of zero-trust architectures, continuous patching and vulnerability management, strong data backups, supply chain risk assessments. Companies are building robust incident response plans and embedding cybersecurity by design into their systems. So there’s a lot that the private sector does, but it is critical to be clear-eyed about the limits of what the private sector can actually do on its own. Even the best-funded private entities cannot deter state-sponsored actors or take down global criminal networks on their own. Cybersecurity, especially in the context of critical infrastructure, is a shared responsibility between government and industry. 
So to strengthen the resilience, I think there are four things that are critical. First, governments must play a more active role in disrupting threat actors, enforcing laws and creating accountability in cyberspace. This includes strengthening national capabilities, supporting law enforcement collaboration across borders, and fully implementing the existing international norms and frameworks of responsible state behavior in cyberspace. Secondly, we need stronger and more operational public-private partnerships, not just during the crises themselves, but in the ongoing governance and design of security measures. This includes real-time threat intelligence sharing, joint exercises, collaborative development of standards and guidelines, and many more. Third, we need to invest in capacity building and resilience, especially in sectors or regions where cybersecurity maturity is still developing. And last but not least, we need to strike the right balance between regulatory obligations and the sustainability of security controls. Regulations should be clear, risk-based, and consistent across borders. At the same time, voluntary standards and flexible frameworks can allow companies to adapt quickly to emerging threats and invest in the most effective protections. So to conclude, protecting critical infrastructure requires continuous investment, cooperation, and innovation. The private sector is deeply committed to strengthening its defenses and ensuring business continuity, but without decisive government action and deep ongoing collaboration, we will not be able to keep pace with the evolving threat environment that Pavel was talking about earlier. Thanks, Marie.


Marie Humeau: Thank you, Timea. I think you point out the importance of what we have to do together, that no one can achieve anything on their own, and that the stakeholders really need to work together. So now I will go to Floreta. Unfortunately, Albania has suffered recent cyber attacks. Can you maybe share some lessons? Because that is also how this works: sharing best practices and lessons learned. And how to be more resilient? Can you also give us some idea of how, at that time, the diplomatic and technical communities collaborated during the response? So Floreta, the floor is yours.


Floreta Faber: Thank you very much. This is a great opportunity to be here on this very honored panel and speak about the case of Albania. Yes, it is true that in mid-2022 we had a big cyber attack on the e-gov services, and Albania is a government which today has over 1,200 e-services for Albanian citizens. Over 95% of all our services to citizens are online, so hitting that system was really aiming to disrupt our work for the citizens and to disrupt their trust in the government. It was a long and very important process for us, because we were fighting corruption, we were bringing more efficiency to citizens, and we were really focused on doing our best. But then this was kind of a wake-up call for us, because we had focused so much on technological advancement in responding to cyber security: in 2022 we did have a law on cyber security according to the NIS 1 directive, we did have an authority on cyber security, and we thought we had it covered. We understood that talking about cyber security it’s not talking about technology, it’s talking about a mindset, it’s talking involving more people from the top management to the simple employee inside every organization that cyber security is something everyone needs to focus on. The investment needs to be in technology, but capacity building is also important, for training people and also for making sure that people who are not technical have the right mindset and awareness, because even one mistake by one person inside a big organization can allow a simple attack to become a big cyber security incident. So these were the main lessons of 2022. We made big changes in the country, really big legal reforms, making a new law on cyber security in 2024 according to the NIS2 directive. 
As we are talking about critical and important infrastructures: this week we are actually expecting the government to approve the new list of critical and important infrastructure, which we built according to new procedures and a new methodology aligned with the NIS2 directive. A big change has been not only working with all the critical infrastructures and their technical employees, but going beyond that: looking at procedures, at how people are trained, at every employee inside organizations, public or private, so that they really understand why they need to be focused on cyber security and cyber attacks. It is not simply a password that needs to be more secure; people need to look at every email and every message they get, to make sure the links they are opening are safe and they can continue their business or private life in a secure manner. There have been big changes inside the authority: we had about 20 people, and now we are going to 85. The list of critical infrastructures has increased by 50% with the new methodology, and we work on a daily basis with all the critical and important infrastructures. The big state-sponsored cyber attack of 2022 was not a one-off; we have been continuously under attack since. The last one happened just last week, a really severe attack on the Tirana municipality. And our technical teams reflect the big changes since 2022, when it was difficult to assemble a good group of experts to work on a case.
In cases like today's, for over a year and a half now, we have the team from the cyber security authority working closely with the cyber security teams inside the affected organizations: first of all, most importantly, to bring back the services, and then to go back and do the reverse engineering, to find out what happened and where the attack came from. And this is where the important part lies: what do we do with attribution, when we find out in the end where the attack came from, as has happened at least in the recent cases? We had over 80 attempted attacks last year, 32 of which became incidents, and we dealt with all of them successfully. But what we fear, as in every country, I believe, is this: if the attacks are severe, if they hit more than one infrastructure, what are our capacities to respond, and how do we work with the diplomatic community to deal with those cases? Now, I have been part of a number of UN processes, including the UN open-ended working group, which gave us a good understanding of how countries around the world actually act or react in case of big cyber attacks and incidents. There is a system through which every country can do that. Maybe some countries need to be more active, but at least from the Albanian side, for over a year now, every Friday we send out all the information that we can make public and share with the other CERTs. Those are practices which we need to reinforce with the diplomatic community as well. Different regions of the world have different experiences, like in Asia or the Baltic countries, so we all come with our own difficulties, sometimes in talking to each other at the political or diplomatic level. And then, of course, the technical side is very important.
So first, we need to make everyone aware that all those groups need to communicate with each other during all the preparation time, in order to be able to protect ourselves, but also to know how to communicate when there is a cyber incident: to be able to share what happened and to protect other critical infrastructures in the same field or category. As we know, cyber attacks can cross borders very easily. It can happen to us, but it can unfortunately happen to every other country, so we need to be prepared and be very, very clear about how we communicate in cases of cyber attacks. Through the UN, the OSCE, and different groups in different regions of the world, we have agreed on confidence-building measures, where protecting critical and important infrastructures is really one of the key pillars we always look at. Maybe I will stop here, and if you have more questions, I will come back.


Marie Humeau: Thank you very much, Floreta. I think you have already touched on some of the points we will come back to at a later stage on cooperation, the framework, and the way ahead. But before we jump into this, I still have two speakers for the first part. You mentioned the need for political commitment and clarity, the growing number of critical infrastructures, and the need to invest in technology and capacity building. So, talking about capacity building, I will now give the floor to Caroline, because the ITU does a lot of capacity building with national CERTs. Maybe you can explain to us how that works, how cross-sectoral cooperation works, and the importance of having simulation exercises, for example. And maybe you can also tell us a bit about the kinds of requests the ITU receives and how you actually address those requests to efficiently protect critical infrastructure. Caroline, the floor is yours.


Caroline Troein: Thank you, Marie. I'd like to start on a positive note, because we've heard a lot about the increasing challenges that countries are facing. But according to the ITU's Global Cybersecurity Index, countries now actually have more cybersecurity measures in place than ever before. That means there are more laws, more technical capabilities, more strategies, more trainings, more cooperation. Great. The challenge, echoing what Marie and others have said, is that countries now really need to think about how to enhance their maturity, sharpen their responsiveness, adapt to the new challenges that, for example, AI brings, and maybe even prepare for what a quantum future would look like. As Marie mentioned, we work in part on national CERTs, and we really see them as foundational to cyber resilience, because they serve as that first line of defense against ICT threats targeting critical infrastructure in particular. As countries evolve, they may develop a cybersecurity agency, but the core responsibility for incident response still sits with that CERT. Going to the point made earlier, cyber capacity building should not be just a technical thing. While CERTs are key and are that front line, they need a legal mandate, clear operational structures, and sustainable funding. All of these form part of what makes a successful CERT. They also need continuous training and the ability to adapt to what comes next. And that's where cyber drills, the cyber exercises the ITU runs, can be a really vital tool, because they aim to simulate real-world attacks, test national response mechanisms, and foster cross-sectoral coordination. Ideally they also help bridge the gap between the technical audience and non-technical communities, which is a big challenge in protecting critical infrastructure. I want to bring in an example here.
I was recently in a country where we ran some exercises specifically focused on critical information infrastructure, a subset of the broader topic. For this, we had some trainings on what participants should be aware of in terms of their relatively new national regulations, the roles of the different actors, and the dependencies that existed. It was interesting to see the shift in mentality that started to happen with many of the participants: firstly, while it was a relatively small country, most of the stakeholders there had not interacted before, and certainly not around these topics. The mentality shift then started to build trust, because they saw how they had connections to each other, how they could help each other, and how they could move from a tick-the-box exercise that the regulator might have put in place to thinking proactively about what methods and pathways they could build for sharing information. Like Floreta was saying: how do you actually share that information in a timely way? What structures do we need in place? What are the vulnerabilities that may be uncomfortable to talk about? Only when you have trust can you actually begin to talk about those limitations. And, of course, these kinds of exercises can bring renewed energy, as everybody ends up on the same page and sees an alignment to move forward. Now, this is just one of the types of interventions that we do. We receive a lot of requests from member states; I think the latest count is 46 countries that have requested some sort of cybersecurity support from the ITU. We work with them on establishing or enhancing a national CERT and on developing or updating national cybersecurity strategies.
We do quite a few different tailored trainings, on topics ranging from bolstering the number of women in cybersecurity to child online protection and, of course, critical infrastructure. We also try to do a lot of train-the-trainer programs, because our ultimate goal is to build local capacity. We are not that big a UN agency, and our team is small within it. One of the things we very much recognize, and the reason I like working with a lot of the people in this room, is the mutual recognition that you have to work together, but you also have to make sure that the country you are helping is empowered to start on its own journey. It needs to own the process going forward. It won't be the ITU doing the cybersecurity of a country; it will be the country doing it. We then need to look at how we can make sure we are developing practices for the country that build trust between stakeholders, as trust is particularly vulnerable when there are political or economic challenges. And with this, I do want to take a side note to say: this is not a developing-versus-developed-country issue. Many of the issues developing countries are facing are ones developed countries are facing too. Are you being agile? Do you have the right people in the right places? Are the stakeholders actually coordinating? For least developed countries, and I'd like to add small island developing states here, there is the added issue that they lack the human capacity, let alone the technical tools. So as countries face these competing priorities, exercises can be a useful way to help identify where the areas for prioritization lie, where they can work together more effectively, and where they should go next. Thanks.


Marie Humeau: You mentioned bridging the gap between the technical audience and the non-technical audience. So I'm going to move to my technical person on the panel. Lars, you are actually trying to bridge this gap between the technical people inside the company and the non-technical, operational people, which is really crucial. From your perspective in the energy sector, what does resilience look like in practice, and how is it evolving? And based on your experience, what concrete actions and processes help strengthen cyber resilience?


Lars Erik Smevold: Thank you for having me on this panel, Marie, I appreciate that a lot. What strikes me in these discussions is that availability is definitely at the front of our minds, because we are running critical infrastructure: hydropower plants, solar, wind, batteries, grid stabilizers, everything that keeps electricity grids in different countries around the globe up and running. For us, resilience means the ability to anticipate, prepare for, respond to, recover from, and learn from the disruptions that happen. To make that happen, we need the people at the sharp end to build a better understanding, together with operations, managers, and policymakers, of how we can operationalize this. The processes are very good, the policies are good, but we need to adapt and keep in mind that with security and cybersecurity we are actually dealing with cyber-physical systems. These cyber-physical systems need to be taken good care of, and we cannot put just any type of security measure into any type of system, because it will affect another system and can have consequences and impacts you may not want. So you need to build a better understanding of what you are actually trying to achieve. For us, a lot of the resiliency is also physical: what kinds of spare parts do we have stored in case of emergency?
For wind and weather, we are highly educated and trained. We also work to handle cyber security attacks and to understand them. For our part, we have actually run drills over the last couple of years directly at our power stations, with the people out there, and they loved that we actually came down to them, talked to them, and made ourselves understand how their day-to-day work and life is, and how a cyber attack would affect them and their families. One thing is the cyber attack in itself, but if it is combined with other types of physical attacks at the same time, how do we handle that? And how do we work together with the national security authorities and the regulators for our sector to achieve our end goal, which is to keep the availability of the critical infrastructure we are working on? At the same time, we also need to adapt to the climate changes we have already felt, and work closely with the other authorities, both in Norway and across the Nordics, because the electricity grids in the Nordics and Europe are highly connected, and we need to build that understanding. From experience, back in 2015-2016 the Nordic TSOs, the transmission system operators responsible for the highways of the electricity grid in each country, actually ran drills together, to see what affected us, together with the national security authorities, the national regulators, and the different CERT teams in these countries. What we achieved from that type of exercise was a better understanding of what knowledge is needed, and not only on the cyber security and IT side: you also need a good understanding of electricity, telecoms, water and sewage, and the other critical infrastructures in this mixture, to make the right decisions at the right time.
So from my perspective, it is definitely this: come together, collaborate, and then enable the people at the sharp end to do their work with a better understanding. I think that's good for now.


Marie Humeau: Thank you very much, Lars. The time is flying fast, because we have a lot to say. Based on your point on the importance of talking and working together, cross-sectoral, cross-regional, between the authorities at national and regional level, which you mentioned as well, Floreta, I would like to look at cooperation frameworks and the path ahead. For this, I will give you a bit shorter time, so we can also have some time for questions. I will start with you, Timea, online. You mentioned the challenges of the private sector in protecting critical infrastructure. What support would you need from policymakers? And why do you think business should care about discussions happening at the international level, in international fora such as the UN? And please keep it short, so we have time for questions from the audience. Thanks.


Ms. Timea Suto: Thanks, Marie. I'll try to be brief. Really, for business, protecting critical infrastructure today is increasingly difficult, and not because of a lack of willingness, but because of the complexity and fragmentation that surround it. We have challenges like many of the essential services we rely on today not being originally conceived as critical, so not designed to operate with the resilience and security that we now require. At the same time, these infrastructures are highly interdependent, not just with each other, but with suppliers, contractors, and digital service providers who might not themselves be classified as critical. Then we have a huge issue with fragmentation: there is no shared global understanding of what constitutes critical infrastructure, with definitions and legal frameworks differing widely between countries, and in some cases missing altogether. And then there is the question of the maturity of critical infrastructure operators, which varies enormously, from companies that have the resources to invest in advanced security measures to those, especially SMEs, that lack the tools, funding, and expertise, but are just as critical in the supply chains. So how do we ensure security for essential services without overburdening the companies we actually rely on to operate and innovate them? I won't talk about what the private sector could do; please read the report that I posted in the chat, where we say a lot about that. But I will focus on the policymakers, as that's what you asked about, Marie. And there, I have a very short answer: it's not more regulation, but smarter policy. Focus less on control and more on creating the right incentives for cybersecurity investment. There is also a need to rebalance responsibility between the private and public sectors.
Governments must recognize that security for socially critical infrastructure is not solely a private burden, particularly when that infrastructure is necessary for public well-being, national security, and economic stability. Instead of defaulting to new regulatory obligations, we need public investment, fiscal support, and policy environments that enable this. There is one line I'd like to leave you with today: if we want effective cybersecurity outcomes, we need inclusive policymaking processes. I hope I was brief enough, Marie.


Marie Humeau: Thank you. I think you pointed out all the complexity and challenges. I guess there are also some challenges within the technical community. So Lars, maybe you can tell us a bit more about how the technical community cooperates, including at the cross-sectoral and international levels. Also, from your perspective, should the technical community engage more with the diplomats? I think you started pointing this out, but if you can dig a bit deeper, that would be great. And how can industry better engage, or be incentivized to engage, in those multilateral processes where governments sit and discuss the protection of critical infrastructure?


Lars Erik Smevold: Yeah, from my perspective, and our perspective, it is definitely important to collaborate more with the diplomats and diplomacy, to get a better common understanding of what is actually needed: what type of resources, how much time, and what it actually takes. So we need arenas where we can actually meet and talk, not so formal, I would say, because that makes it easier and more comfortable to speak out. Today I brought out my white shirt; I tried to adapt to Floreta. I think that is a start. And maybe sometime I will invite Floreta and others on a trip, for our part, to some of our plants that are available, to talk to our specialists and technicians, because that will definitely help you and others to understand, and the same the other way around: what is your work, and what can we help you with along the way? Because, as was mentioned before, arenas where we can actually meet and build a real understanding of what critical infrastructure is are very, very important. Sometimes the discussions are at such a high level that the people on the ground do not feel: does this actually affect me? So we need the right arenas: cross-sectoral with the diplomats, but also internally in the countries, cross-sectoral, and also across borders, because in the electricity community we have ENTSO-E in Europe, the interest group for TSOs, and we also have CIGRE, a global interest organization that also has cyber security on its agenda. Maybe we from the cyber security technical perspective can sometimes go to these arenas and talk more, and the same from the diplomacy and IT community, so we get a better understanding of electricity, water, and the other sectors.


Marie Humeau: Thank you very much, Lars. And thankfully I have a white and blue shirt, so I'm not sitting in between the two of you. I'm also wearing sneakers, you can't see, but I'm not that formal. I think one of the important things is exactly this: that one understands the other. It is not only for one side to come to the diplomatic arena; it is also for the diplomats to concretely understand what your needs are and how you operate on a daily basis, and to create this environment of trust and be down to earth. Pavel, I'm going to jump to you to look at how the UN framework can actually be made more practical in protecting critical infrastructure. How can we follow what Lars just said, be more practical and down to earth, and better understand each other to make sure we create this trusted environment? Pavel, over to you.


Pavel Mraz: Thank you so much, Marie. The UN framework for responsible state behavior in cyberspace, which has been mentioned by Floreta and by Caroline, does provide a strong foundation for protecting critical infrastructure. At the core of this framework are agreed voluntary cyber norms, something all states have committed to, notably norm F, which affirms that states should not conduct or support any ICT activity that intentionally damages critical infrastructure. How is it practically implemented? Among the things currently being done at the UN and global level, countries are designating points of contact globally for crisis communication, in recognition that you cannot exchange business cards in a hurricane. When a real cyber crisis hits and you need assistance from abroad, whether from the private sector or from another member state if the malicious activity is emanating from outside your own territory, you need to have the channels, the trust, and the network already in place to know where to reach out. Of course, there is another challenge: when we do capacity building in developing countries, we often see the mindset of cybersecurity being an IT department problem, or a national cybersecurity agency problem. This is where tabletop exercises simulating real crises really come into focus, because they bring in all the decision makers and demonstrate that when critical services are down, whether energy, water, or health care, the problem is far broader than one for a national cybersecurity agency. That really helps bring people together, as Caroline said, and we have seen this on the ground. So in order for the UN framework to have real-world impact and not remain just on paper, it must be operationalized nationally through legislation, institutional coordination, and sustained investment in cybersecurity, supported not only by the technical community
but also by the political decision makers in a country. It must be inclusive, involving technical experts, civil society, and the private sector, in other words, all the stakeholders that have a role to play in protecting critical infrastructure. And of course it should be backed by practical capacity building. I will leave it at that, in the interest of time, and hand it back to you, Marie.


Marie Humeau: Thank you, Pavel. So I think, Floreta, I will give you the floor, and I would like to keep a few minutes for questions, if there are any, and also for Akhil at the end to wrap up all the information we have gathered. But you are the perfect link between the diplomats and the technical community. You are a diplomat, you sit in a technical organization, you have been part of the UN discussions, and you are also part of the Women in Cyber Fellowship. Maybe you can very quickly give us your view on how to bridge those different communities and how to ensure that each community understands and engages with the others.


Floreta Faber: As was said here, it is absolutely crucial that those communities talk to each other. As I mentioned, Albania has undertaken a number of reforms trying to build the best possible cyber ecosystem in the country, in order to reach the best results. Unfortunately, it tends to be the countries that have had big attacks that have learned the lesson. As we always say in cyber security, it's like a football match: you can be the best team in the world, and you always train so that, when there is a game, you don't concede a goal. But sometimes, even if you are the best and have the best players, the other side still scores. It's the same in cyber security: you prepare, you believe you have the best team protecting you, but there are circumstances when attacks can still hit you. So this is the moment: we all train and talk in peacetime, when there is no hurricane, in order to be responsive. That's why those communities need to talk to each other, because a crisis can be internal to an organization, which can be big; it can spill out into society; but it can also become an international issue. And especially when it becomes an international issue, it is the diplomatic community that does the talking. Now, the UN is one of the best examples, along with the OSCE and other organizations, of bringing together diplomats and technical communities, and that is one way to talk to each other. There are fellowships, like the Women in Cyber Fellowship, which I have been part of, the UN-Singapore Fellowship, and numerous other fellowships supported by the UN, where you see those communities together in the same room for one or two weeks, and obviously you start to build that trust in talking to each other. The UN-based points of contact directory is another step in how countries talk to each other.
But on a daily basis, as you said, it is really important that we all speak with the critical and important infrastructures. We maybe have the luxury of being a small country; we are going to have over 200 critical and important infrastructures, while in some countries there are a few thousand. But we all have to find a way, either through clusters or through sectors, for them to talk to each other and to the national cybersecurity authority, and to understand why not only local but also international connections are very important. We have put together a new strategy on cybersecurity, which is also one of the sub-laws that needs to be passed in a week or two. In Albania, there are two main points of focus: supporting the critical and important infrastructure, and awareness, including support for children being safe online, but awareness at every level of society, for underrepresented groups, SMEs, all groups who otherwise do not hear about cybersecurity. One of the five pillars of the strategy is international cooperation. In some countries, international cooperation is important because they do not have the means, opportunities, and money to really invest in cybersecurity, and international support is very important in that case. But we also need international support because we need to be connected. It is a world where we need to speak freely to each other, and when it comes to cybersecurity, there are no borders. An attack can have an effect in one country and spread to others. It can be a European or, say, a U.S. organization or company with branches in a number of countries, and one hit can strike several countries all at once. So that's why it's important. Another thing we have tried is exactly this: bringing an experienced diplomat inside a technical organization.
It was for me to understand first: what would I do in an organization like this, coming with at least two years of experience working on cyber diplomacy? But now I understand the model. Singapore is an example: they have a team with one leadership but two groups, one with the Ministry of Foreign Affairs and Communication, as they call it, and one technical group, understanding that there should be a very strong link between the organizations. We have started doing this too, and it works perfectly, because translation is very important with the internationals, with the diplomatic community. You also translate everything the technical groups have done into the way you present it to your bosses, to the government, to the prime minister, to people who want to know what happened, because if you go too technical, it is, you know, a different language. The point is that people need to understand, in their own language, what is going on and how they should be prepared. So this link is very important, and I believe every country, one way or another, is trying to take steps in this direction.


Marie Humeau: Thank you, Floreta. Caroline, I will give you the floor for one minute, and then I will keep two minutes for a question from the audience and two minutes for Akhil to wrap up. Floreta, you mentioned that international cooperation is key, so maybe, Caroline, you can very briefly share some cooperation models that have proven effective, that could serve as a basis for best practices and provide some ideas for future discussion in the UN. Thanks, and for the sake of time I won't say more.


Caroline Troein: We did a tabletop exercise with UNIDIR and UNODA for the points of contact directory that Pavel mentioned. I'll just summarize and say it often felt like the technical and diplomatic contacts were operating from completely different playbooks. So more coordination is definitely needed here, and we should note that coordination needs to happen at the national, regional, and global levels, because a lot of coordination efforts are concentrated at either the diplomatic or the technical level, and we need those cross-cutting aspects. To quickly mention a few models: there is the ASEAN CERT maturity framework, MISA, the OAS, which is a very successful model, and the OIC; they are all driving coordination.


Marie Humeau: Thank you very much, Caroline. I just want to check with the audience if there is a burning question. If not, I do have one, but in the interest of time, yes, please.


Participant: Hello, my name is Eirik, and I work for the IT company owned by the Church of Norway. I'm interested in how to share more sensitive data across borders, because as a technical person you sometimes get technical information that you don't necessarily want to make public, but you still want to share it with other technical people so that they can defend their systems better. How can we make arrangements for that?


Floreta Faber: This is part of building trust with the people you work with. The Western Balkans is a region where the technical communities, in different ways and different formats, try to stay in contact with each other, whether through WhatsApp groups, email lists, or the platforms we use to share information weekly. We are also trying another approach, which we believe is a long-term investment: we have started a cyber camp for young people in the region, and we are building an alumni group of those who go on in cybersecurity. They first met when they were 20 or 21, and since they come together at the same cyber camps every year, they keep meeting in the alumni group, whichever year they attended. We held the first alumni meeting online last year; we are going to do it in person next. We try to build trust really from a young age, because these are things that take time, and sometimes people are prevented from talking to each other for trust reasons that are not only about cybersecurity. To overcome those, we are trying, in every practical way possible, to really build the communities regionally, all together.


Marie Humeau: Okay, great question; I think we could now talk about this for 20 minutes, and I think Lars was also willing to answer, but we are cut short of time. So very briefly, Akhil, in about a minute, can you wrap up the entire hour of discussion that we had? Thank you, and you have the last word.


Mr. Akhil Thomas: Thank you, Marie. Well, as you said, I get the last word, which is a slightly unfair advantage of going last: it means I get to sound smart by summarizing all the great points that were shared here. Let me try to do justice to that in just two minutes. Firstly, thank you to our panelists and participants, both on site and online. The key takeaways from today's session underscore that collaboration is non-negotiable: whether it is bridging the diplomatic-technical divide, strengthening CERT-to-CERT cooperation, or fostering public-private partnerships, silos are a luxury we cannot afford. We heard from Floreta that resilience is both a mindset and a systemic effort, rooted in governance, funding, and international collaboration. Lars highlighted the energy sector's reliance on cross-border teamwork, where regular drills and shared awareness are vital. Timea reminded us that while the private sector is innovating with zero trust and threat intelligence, what is needed now to reduce fragmentation is smarter policy, not necessarily more regulation. Caroline emphasized the ITU's role in building CERT capacity through cyber drills and peer learning, stressing that resilience requires legal mandates and cross-cutting coordination at all levels, national to global. And Pavel mapped the alarming scale of threats, from ransomware to space infrastructure, and the urgent need to turn UN norms into action through practical tools like crisis exercises, points of contact, and inclusive capacity building. Three themes came through very clearly: preparation, through exercises, clear protocols, and strong leadership; inclusivity, making sure that governments, industry, and civil society all have a seat at the table; and shared responsibility, recognizing that threats cascade across borders and no single actor can secure critical infrastructure alone.
As we conclude, I encourage everyone to carry forward today’s calls to action, concrete partnerships, actionable frameworks, and sustained dialogue. Thank you again for your insights and wishing you all a meaningful and productive time at IGF. Over to you, Marie.


Marie Humeau: Thank you. I am just closing now, as we are running out of time. It has been a very rich discussion; I would like to thank the panelists, and I will give the floor back to the next panel.



Pavel Mraz

Speech speed

197 words per minute

Speech length

1301 words

Speech time

394 seconds

Nearly 40% of state cyber operations target critical infrastructure including energy, healthcare, finance, water, and telecommunications

Explanation

Pavel Mraz highlighted that critical infrastructure has become both an attractive target for financially motivated actors and a strategic target for state-affiliated actors. This represents a significant portion of documented cyber operations by states in 2024.


Evidence

UNIDIR research report summarizing main threats of 2024 in cyberspace shows nearly 40% of all documented cyber operations by states focused on critical infrastructure sectors


Major discussion point

Current Cyber Threat Landscape for Critical Infrastructure


Topics

Cybersecurity


Agreed with

– Ms. Timea Suto

Agreed on

Critical infrastructure faces increasingly complex and diverse threats


Ransomware attacks surged by 275% with global financial losses exceeding $10 trillion, making cybercrime equivalent to the world’s third largest economy

Explanation

Pavel Mraz presented alarming statistics showing a massive surge in ransomware attacks and their economic impact. He used the comparison to national economies to illustrate the scale of cybercrime’s financial impact globally.


Evidence

275% surge in ransomware attacks in 2024; global financial losses from cybercrime disruptions exceeded US$10 trillion, making cybercrime equivalent to the world’s third largest economy by GDP


Major discussion point

Current Cyber Threat Landscape for Critical Infrastructure


Topics

Cybersecurity | Economic


Supply chain attacks are becoming more prominent, leveraging “target one, compromise many” principle to reach downstream customers

Explanation

Pavel Mraz explained how malicious cyber actors are increasingly using supply chain attacks as an efficient method to target multiple victims. This approach allows attackers to compromise many organizations by targeting a single point in the supply chain.


Evidence

Attacks on digital supply chains leveraging the principle of ‘target one, compromise many’ to target downstream customers, including critical infrastructure operators


Major discussion point

Current Cyber Threat Landscape for Critical Infrastructure


Topics

Cybersecurity


UN framework provides foundation through voluntary cyber norms, particularly norm F prohibiting attacks on critical infrastructure

Explanation

Pavel Mraz outlined how the UN framework for responsible state behavior in cyberspace provides a strong foundation for protecting critical infrastructure. He specifically mentioned norm F which commits states not to conduct or support ICT activities that intentionally damage critical infrastructure.


Evidence

UN framework includes agreed voluntary cyber norms, notably norm F which affirms that states should not conduct or support any ICT activity that intentionally damages critical infrastructure


Major discussion point

International Cooperation Frameworks


Topics

Cybersecurity | Legal and regulatory


Countries are designating points of contact for crisis communication, recognizing need for pre-established trust and networks

Explanation

Pavel Mraz emphasized the importance of having communication channels and trust networks established before a crisis occurs. He noted that countries cannot exchange business cards during a cyber hurricane and need assistance channels ready in advance.


Evidence

Countries are designating points of contact globally for crisis communication, recognizing that you cannot exchange business cards in a hurricane when a real cyber crisis hits


Major discussion point

International Cooperation Frameworks


Topics

Cybersecurity


Agreed with

– Floreta Faber
– Caroline Troein

Agreed on

Trust-building is essential for effective information sharing and cooperation


Tabletop exercises help demonstrate that critical infrastructure attacks are broader problems than just IT department issues

Explanation

Pavel Mraz explained how tabletop exercises are effective in showing decision makers that when critical services like energy, water or healthcare are down, the problem extends far beyond what a national cybersecurity agency can handle alone. This helps bring different stakeholders together.


Evidence

Tabletop exercises simulating real crises help bring decision makers together by demonstrating that when critical services are down, the problem is far broader than one for a national cybersecurity agency alone


Major discussion point

International Cooperation Frameworks


Topics

Cybersecurity | Development


Agreed with

– Floreta Faber
– Caroline Troein

Agreed on

Capacity building and training are fundamental to cybersecurity resilience


International cooperation must be operationalized through legislation, institutional coordination, and sustained investment

Explanation

Pavel Mraz argued that for the UN framework to have real-world impact and not remain just on paper, it must be implemented practically at the national level. This requires comprehensive approaches involving multiple stakeholders and sustained commitment.


Evidence

UN framework must be operationalized nationally through legislation, institutional coordination, sustained investment in cybersecurity, and must be inclusive involving technical experts, civil society and private sector


Major discussion point

International Cooperation Frameworks


Topics

Cybersecurity | Legal and regulatory | Development


Agreed with

– Ms. Timea Suto
– Floreta Faber
– Caroline Troein
– Lars Erik Smevold

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity



Ms. Timea Suto

Speech speed

157 words per minute

Speech length

1166 words

Speech time

444 seconds

Critical infrastructure faces threats from state-nexus actors, organized cybercriminal ecosystems, and insider threats from employees or contractors

Explanation

Timea Suto outlined the diverse threat landscape facing critical infrastructure, categorizing threats into three main types. She explained how each type has different motivations and capabilities, from well-funded government-supported APTs to organized criminal groups and internal threats from people with privileged access.


Evidence

State nexus threat actors (APTs) are well-funded and capable of long-term complex operations; cybercriminal ecosystems are globally distributed and resilient; insider threats include malicious or negligent employees and contractors with privileged access


Major discussion point

Current Cyber Threat Landscape for Critical Infrastructure


Topics

Cybersecurity


Agreed with

– Pavel Mraz

Agreed on

Critical infrastructure faces increasingly complex and diverse threats


Private sector is investing in zero-trust architectures, vulnerability management, supply chain assessments, and incident response plans

Explanation

Timea Suto described how the private sector is actively responding to cyber threats by making significant investments in cybersecurity resilience. She outlined various technical and procedural measures that companies are adopting to strengthen their defenses.


Evidence

Growing adoption of zero-trust architectures, continuous patching and vulnerability management, strong data backups, supply chain risk assessments, robust incident response plans, and embedding cybersecurity by design


Major discussion point

Private Sector Challenges and Needs


Topics

Cybersecurity | Economic


Even well-funded private entities cannot deter state-sponsored actors or dismantle global criminal networks alone

Explanation

Timea Suto emphasized the limitations of what private sector can achieve independently, regardless of their resources. She argued that cybersecurity for critical infrastructure is a shared responsibility that requires government involvement in addressing threats beyond private sector capabilities.


Evidence

Even the best-funded private entities cannot deter state-sponsored actors or take down global criminal networks on their own; cybersecurity is a shared responsibility between government and industry


Major discussion point

Private Sector Challenges and Needs


Topics

Cybersecurity


Agreed with

– Pavel Mraz
– Floreta Faber
– Caroline Troein
– Lars Erik Smevold

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity


Industry needs smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors

Explanation

Timea Suto advocated for a policy approach that emphasizes creating the right incentives for cybersecurity investment rather than imposing more regulatory burdens. She argued for rebalancing responsibilities, recognizing that security for socially critical infrastructure shouldn’t be solely a private burden.


Evidence

Focus less on control and more on creating right incentives for cybersecurity investment; governments must recognize that security for socially critical infrastructure is not solely a private burden; need public investment, fiscal support, and enabling policy environments


Major discussion point

Private Sector Challenges and Needs


Topics

Cybersecurity | Legal and regulatory | Economic


Disagreed with

– Floreta Faber

Disagreed on

Regulatory approach to private sector cybersecurity


Critical infrastructure protection requires inclusive policymaking processes involving all stakeholders

Explanation

Timea Suto concluded with the principle that effective cybersecurity outcomes require inclusive policymaking processes. She emphasized that all relevant stakeholders must be involved in developing policies for protecting critical infrastructure.


Evidence

If we want effective cybersecurity outcomes, we need inclusive policymaking processes


Major discussion point

Private Sector Challenges and Needs


Topics

Cybersecurity | Legal and regulatory



Floreta Faber

Speech speed

147 words per minute

Speech length

2226 words

Speech time

904 seconds

Albania’s 2022 cyber attack on e-government services revealed that cybersecurity is about mindset and involving all people, not just technology

Explanation

Floreta Faber shared Albania’s experience with a major cyber attack that targeted their extensive e-government services. She explained how this incident served as a wake-up call, revealing that cybersecurity success depends on changing organizational mindset and involving everyone, not just focusing on technological solutions.


Evidence

Albania had over 1,200 e-services with 95% of citizen services online when attacked in mid-2022; the attack was a wake-up call showing cybersecurity is about mindset and involving people from top management to simple employees


Major discussion point

National Experiences and Lessons Learned


Topics

Cybersecurity | Development


Cybersecurity requires investment in both technology and capacity building, with awareness training for all employees from top management to simple workers

Explanation

Floreta Faber emphasized that effective cybersecurity requires a dual approach combining technological investments with comprehensive human capacity building. She stressed that everyone in an organization needs proper training and awareness, as one mistake by any person can allow a simple attack to become a major incident.


Evidence

Investment needed in technology but capacity building is also important for training people; people who are not technical need the right mindset and awareness that even one mistake by one person can allow a simple attack to become a big incident


Major discussion point

National Experiences and Lessons Learned


Topics

Cybersecurity | Development


Agreed with

– Pavel Mraz
– Caroline Troein

Agreed on

Capacity building and training are fundamental to cybersecurity resilience


Albania increased cybersecurity authority staff from 20 to 85 people and expanded critical infrastructure list by 50% following attacks

Explanation

Floreta Faber detailed the concrete organizational and regulatory changes Albania made in response to cyber attacks. These changes included significant expansion of human resources and updating their approach to identifying critical infrastructure according to new EU directives.


Evidence

Authority staff increased from about 20 people to 85 people; new law on cybersecurity in 2024 according to NIS2 directive; critical infrastructure list increased by 50% with new methodology


Major discussion point

National Experiences and Lessons Learned


Topics

Cybersecurity | Legal and regulatory | Development


Disagreed with

– Ms. Timea Suto

Disagreed on

Regulatory approach to private sector cybersecurity


Regular cyber drills help build understanding between stakeholders and create trust for sharing sensitive information

Explanation

Floreta Faber explained how Albania conducts regular exercises and drills to improve coordination between different stakeholders. She emphasized that these activities help build the trust necessary for effective information sharing and collaborative response to cyber incidents.


Evidence

Albania has had over 80 attempts and 32 attacks that became incidents last year, dealing with all cases successfully; regular exercises help build trust between technical teams and different organizations


Major discussion point

National Experiences and Lessons Learned


Topics

Cybersecurity | Development


Bringing experienced diplomats into technical organizations creates important translation between communities

Explanation

Floreta Faber shared her personal experience as a diplomat working within a technical cybersecurity organization. She explained how this arrangement helps bridge the gap between diplomatic and technical communities by providing necessary translation and communication between different audiences.


Evidence

Singapore has a team with one leadership but two groups, one with Ministry of Foreign Affairs and one technical group; bringing experienced diplomat inside technical organization helps translate between communities and present technical work to government leaders


Major discussion point

Bridging Technical and Diplomatic Communities


Topics

Cybersecurity


Agreed with

– Pavel Mraz
– Ms. Timea Suto
– Caroline Troein
– Lars Erik Smevold

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity


Building trust requires long-term investment including regional cooperation and youth engagement through cyber camps

Explanation

Floreta Faber described Albania’s approach to building long-term trust and cooperation in the Western Balkans region. She explained their strategy of investing in youth through cyber camps to create lasting professional relationships and trust networks that will benefit future cybersecurity cooperation.


Evidence

Western Balkans technical communities stay in contact through WhatsApp groups, emails, and platforms for weekly information sharing; cyber camp for young people creates alumni groups who meet annually to build trust from young age


Major discussion point

Bridging Technical and Diplomatic Communities


Topics

Cybersecurity | Development


Agreed with

– Pavel Mraz
– Caroline Troein

Agreed on

Trust-building is essential for effective information sharing and cooperation


Regional cooperation can start with informal communication channels like WhatsApp groups and email platforms for weekly information sharing

Explanation

Floreta Faber provided practical examples of how technical communities can begin sharing sensitive threat information across borders. She described informal but effective communication methods that help build trust and enable regular information exchange between cybersecurity professionals.


Evidence

Western Balkans technical communities use WhatsApp groups, email groups, and platforms for sharing weekly information; Albania sends information every Friday that can be made public and shared with other CERTs


Major discussion point

Practical Information Sharing Challenges


Topics

Cybersecurity


Trust-building requires sustained engagement and can be developed through alumni networks of cybersecurity professionals

Explanation

Floreta Faber explained their long-term strategy for building professional trust networks through sustained engagement programs. She described how creating alumni networks of cybersecurity professionals who first meet at young ages can overcome political and trust barriers that might otherwise prevent cooperation.


Evidence

Cyber camp alumni groups where people first meet at age 20-21 and continue meeting annually; first alumni meeting was online, planning in-person meetings; trying to build trust from young age because trust-building takes time


Major discussion point

Practical Information Sharing Challenges


Topics

Cybersecurity | Development



Caroline Troein

Speech speed

161 words per minute

Speech length

1079 words

Speech time

400 seconds

Countries now have more cybersecurity measures than ever before including laws, technical capabilities, strategies, and training programs

Explanation

Caroline Troein provided a positive perspective on global cybersecurity progress, citing ITU’s Global Cybersecurity Index findings. She noted that while challenges are increasing, countries are also implementing more comprehensive cybersecurity measures across multiple dimensions.


Evidence

According to ITU’s Global Cybersecurity Index, countries have more cybersecurity measures in place than ever before including more laws, technical capabilities, strategies, trainings, and cooperation


Major discussion point

Role of Capacity Building and Technical Cooperation


Topics

Cybersecurity | Development | Legal and regulatory


National CERTs serve as the first line of defense and need legal mandate, operational structures, sustainable funding, and continuous training

Explanation

Caroline Troein emphasized the foundational role of national CERTs in cyber resilience, explaining that they serve as the primary defense against ICT threats targeting critical infrastructure. She outlined the essential requirements for effective CERT operations beyond just technical capabilities.


Evidence

National CERTs are foundational to cyber resilience as first line of defense; they need legal mandate, clear operational structures, sustainable funding, and continuous training; core incident response responsibilities remain with CERTs even as countries develop cybersecurity agencies


Major discussion point

Role of Capacity Building and Technical Cooperation


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Pavel Mraz
– Floreta Faber

Agreed on

Capacity building and training are fundamental to cybersecurity resilience


Cyber exercises simulate real-world attacks, test response mechanisms, and foster cross-sectoral coordination while bridging technical and non-technical communities

Explanation

Caroline Troein explained the multiple benefits of cyber exercises as capacity building tools. She emphasized how these exercises serve not only to test technical responses but also to build understanding and coordination between different stakeholder communities.


Evidence

Cyber drills simulate real world attacks, test national response mechanisms, foster cross-sectoral coordination, and help bridge the gap between technical and non-technical communities; exercises help build trust and show stakeholders their connections and dependencies


Major discussion point

Role of Capacity Building and Technical Cooperation


Topics

Cybersecurity | Development


Agreed with

– Pavel Mraz
– Floreta Faber

Agreed on

Trust-building is essential for effective information sharing and cooperation


ITU receives requests from 46 countries for cybersecurity support including CERT establishment, strategy development, and specialized training

Explanation

Caroline Troein provided concrete evidence of global demand for cybersecurity capacity building by citing the number of countries requesting ITU support. She outlined the diverse types of assistance requested, from institutional development to specialized training programs.


Evidence

46 countries have requested support from ITU in cybersecurity; support includes establishing or enhancing national CERTs, developing or updating national cybersecurity strategies, tailored trainings on various topics, and train-the-trainer programs


Major discussion point

Role of Capacity Building and Technical Cooperation


Topics

Cybersecurity | Development


Coordination needs to happen at national, regional, and global levels with cross-cutting aspects between diplomatic and technical levels

Explanation

Caroline Troein emphasized the multi-level nature of coordination required for effective cybersecurity. She noted that coordination efforts are often concentrated on either diplomatic or technical levels, but what’s needed are approaches that cut across both dimensions at all levels.


Evidence

Coordination needs to happen at national, regional, and global levels; coordination efforts are often concentrated on either the diplomatic or the technical level, and cross-cutting aspects are needed; mentioned ASEAN, MISA, OAS, and OIC as successful coordination models


Major discussion point

Practical Information Sharing Challenges


Topics

Cybersecurity


Agreed with

– Pavel Mraz
– Ms. Timea Suto
– Floreta Faber
– Lars Erik Smevold

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity



Lars Erik Smevold

Speech speed

134 words per minute

Speech length

981 words

Speech time

437 seconds

Energy sector resilience requires ability to anticipate, prepare for, respond to, recover from, and learn from disruptions

Explanation

Lars Erik Smevold defined resilience in the energy sector as a comprehensive capability that goes beyond just prevention to include full cycle management of disruptions. He emphasized that this requires involving people at all levels from operations to management and policy makers.


Evidence

Running critical infrastructure like hydropower plants, solar, wind, batteries, grid stabilizers; resilience requires involving people in the sharp end, operations, managers, and policy makers to operationalize security measures


Major discussion point

Operational Resilience in Critical Sectors


Topics

Cybersecurity | Infrastructure


Cybersecurity must be adapted to cyber-physical systems where security measures on one system can affect other interconnected systems

Explanation

Lars Erik Smevold explained the complexity of securing cyber-physical systems in critical infrastructure, where traditional security measures may not be appropriate. He emphasized the need to understand system interdependencies and potential unintended consequences of security implementations.


Evidence

Security and cybersecurity are adapting into cyber physical systems; cannot put any type of security measures into any type of system because that system will affect another type of system with consequences you may not want


Major discussion point

Operational Resilience in Critical Sectors


Topics

Cybersecurity | Infrastructure


Cross-border electricity grid connections require coordinated response between Nordic and European transmission system operators

Explanation

Lars Erik Smevold highlighted the interconnected nature of electricity grids across borders and the need for coordinated cybersecurity responses. He provided specific examples of successful cooperation between Nordic transmission system operators and the importance of understanding climate change impacts.


Evidence

Electricity grids in Nordics and Europe are highly connected; Nordic TSOs did drills in 2015-2016 together with national security, regulators, and CERT teams; need to adapt to climate changes and work with other authorities


Major discussion point

Operational Resilience in Critical Sectors


Topics

Cybersecurity | Infrastructure


Technical specialists need better understanding of different critical infrastructure sectors including electricity, telecoms, and water systems

Explanation

Lars Erik Smevold emphasized the importance of cross-sectoral knowledge among technical specialists to make proper decisions during incidents. He argued that cybersecurity professionals need understanding beyond just IT and cybersecurity to include knowledge of various critical infrastructure sectors.


Evidence

Need good understanding not only of cybersecurity and IT, but also from electricity, telecoms, water and sewage, and other critical infrastructure in the mixture to make right decisions at the right time


Major discussion point

Operational Resilience in Critical Sectors


Topics

Cybersecurity | Infrastructure


Technical and diplomatic communities need informal arenas to meet and build understanding of each other’s work and resource needs

Explanation

Lars Erik Smevold advocated for creating informal meeting opportunities between technical and diplomatic communities to build mutual understanding. He suggested that less formal settings make it easier and more comfortable for both sides to communicate effectively.


Evidence

Important to collaborate more with diplomats to get common understanding of what’s needed, what resources are needed, and how much time things take; need arenas that are not too formal to make it easier and comfortable to speak


Major discussion point

Bridging Technical and Diplomatic Communities


Topics

Cybersecurity


Agreed with

– Pavel Mraz
– Ms. Timea Suto
– Floreta Faber
– Caroline Troein

Agreed on

Multi-stakeholder collaboration is essential for cybersecurity


Cross-visits between technical facilities and diplomatic offices help build mutual understanding of operational realities

Explanation

Lars Erik Smevold suggested practical approaches for building understanding between communities, including site visits to technical facilities and diplomatic offices. He emphasized the importance of both sides understanding each other’s daily work and operational constraints.


Evidence

Suggested inviting diplomats to visit power plants and technical facilities to talk to specialists and technicians; also suggested technical people visit diplomatic offices to understand their work and how they can help each other


Major discussion point

Bridging Technical and Diplomatic Communities


Topics

Cybersecurity



Participant

Speech speed

115 words per minute

Speech length

71 words

Speech time

36 seconds

Technical professionals need secure channels to share sensitive threat information across borders without making it public

Explanation

A participant from the Church of Norway’s IT company raised a practical question about sharing sensitive technical information across borders. They highlighted the challenge technical professionals face when they have threat information that could help others defend their systems but cannot be shared publicly.


Evidence

Works for IT company owned by Church of Norway; interested in sharing sensitive technical data across borders that technical people don’t want to go public but want to share with other technical people for defense


Major discussion point

Practical Information Sharing Challenges


Topics

Cybersecurity



Mr. Akhil Thomas

Speech speed

166 words per minute

Speech length

313 words

Speech time

113 seconds


Marie Humeau

Speech speed

156 words per minute

Speech length

1686 words

Speech time

646 seconds

Agreements

Agreement points

Multi-stakeholder collaboration is essential for cybersecurity

Speakers

– Pavel Mraz
– Ms. Timea Suto
– Floreta Faber
– Caroline Troein
– Lars Erik Smevold

Arguments

International cooperation must be operationalized through legislation, institutional coordination, and sustained investment


Even well-funded private entities cannot deter state-sponsored actors or dismantle global criminal networks alone


Bringing experienced diplomats into technical organizations creates important translation between communities


Coordination needs to happen at national, regional, and global levels with cross-cutting aspects between diplomatic and technical levels


Technical and diplomatic communities need informal arenas to meet and build understanding of each other’s work and resource needs


Summary

All speakers emphasized that cybersecurity, especially for critical infrastructure, requires collaboration across sectors, borders, and communities. No single actor can address cyber threats alone.


Topics

Cybersecurity | Development


Capacity building and training are fundamental to cybersecurity resilience

Speakers

– Pavel Mraz
– Floreta Faber
– Caroline Troein

Arguments

Tabletop exercises help demonstrate that critical infrastructure attacks are broader problems than just IT department issues


Cybersecurity requires investment in both technology and capacity building, with awareness training for all employees from top management to simple workers


National CERTs serve as the first line of defense and need legal mandate, operational structures, sustainable funding, and continuous training


Summary

Speakers agreed that effective cybersecurity requires comprehensive capacity building that goes beyond technical training to include awareness at all organizational levels and practical exercises.


Topics

Cybersecurity | Development


Trust-building is essential for effective information sharing and cooperation

Speakers

– Pavel Mraz
– Floreta Faber
– Caroline Troein

Arguments

Countries are designating points of contact for crisis communication, recognizing need for pre-established trust and networks


Building trust requires long-term investment including regional cooperation and youth engagement through cyber camps


Cyber exercises simulate real-world attacks, test response mechanisms, and foster cross-sectoral coordination while bridging technical and non-technical communities


Summary

Speakers emphasized that trust must be built before crises occur and requires sustained investment in relationships and communication channels.


Topics

Cybersecurity | Development


Critical infrastructure faces increasingly complex and diverse threats

Speakers

– Pavel Mraz
– Ms. Timea Suto

Arguments

Nearly 40% of state cyber operations target critical infrastructure including energy, healthcare, finance, water, and telecommunications


Critical infrastructure faces threats from state-nexus actors, organized cybercriminal ecosystems, and insider threats from employees or contractors


Summary

Both speakers highlighted the severity and diversity of threats targeting critical infrastructure, including state actors, criminals, and insider threats.


Topics

Cybersecurity


Similar viewpoints

Both emphasized the need for balanced approaches to cybersecurity governance that involve appropriate resource allocation and smart policy rather than just regulatory burden.

Speakers

– Ms. Timea Suto
– Floreta Faber

Arguments

Industry needs smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors


Albania increased cybersecurity authority staff from 20 to 85 people and expanded critical infrastructure list by 50% following attacks


Topics

Cybersecurity | Legal and regulatory | Economic


Both speakers emphasized the importance of regular exercises and cross-border coordination, drawing from their practical experience in managing critical infrastructure.

Speakers

– Lars Erik Smevold
– Floreta Faber

Arguments

Cross-border electricity grid connections require coordinated response between Nordic and European transmission system operators


Regular cyber drills help build understanding between stakeholders and create trust for sharing sensitive information


Topics

Cybersecurity | Infrastructure


Both speakers highlighted the global demand for cybersecurity capacity building and the effectiveness of practical exercises in building understanding across communities.

Speakers

– Caroline Troein
– Pavel Mraz

Arguments

ITU receives requests from 46 countries for cybersecurity support including CERT establishment, strategy development, and specialized training


Tabletop exercises help demonstrate that critical infrastructure attacks are broader problems than just IT department issues


Topics

Cybersecurity | Development


Unexpected consensus

Informal communication channels are as important as formal frameworks

Speakers

– Floreta Faber
– Lars Erik Smevold

Arguments

Regional cooperation can start with informal communication channels like WhatsApp groups and email platforms for weekly information sharing


Technical and diplomatic communities need informal arenas to meet and build understanding of each other’s work and resource needs


Explanation

It was unexpected to see both a diplomat and a technical expert emphasize the importance of informal communication channels like WhatsApp groups alongside formal diplomatic and technical frameworks. This suggests that practical, everyday communication tools are recognized as vital for cybersecurity cooperation.


Topics

Cybersecurity


Long-term youth engagement as a cybersecurity strategy

Speakers

– Floreta Faber

Arguments

Trust-building requires sustained engagement and can be developed through alumni networks of cybersecurity professionals


Explanation

The emphasis on building cybersecurity cooperation through youth engagement and alumni networks represents an unexpected long-term strategic approach that goes beyond traditional diplomatic or technical cooperation models.


Topics

Cybersecurity | Development


Overall assessment

Summary

The speakers demonstrated remarkable consensus on the need for multi-stakeholder collaboration, capacity building, trust-building, and the recognition that cyber threats to critical infrastructure are complex and require coordinated responses. There was strong agreement on the limitations of single-actor approaches and the importance of both formal and informal cooperation mechanisms.


Consensus level

High level of consensus with practical implications for cybersecurity policy. The agreement suggests that the cybersecurity community has matured in its understanding that technical solutions alone are insufficient, and that sustainable cybersecurity requires investment in human relationships, institutional cooperation, and long-term capacity building across all stakeholder groups.


Differences

Different viewpoints

Regulatory approach to private sector cybersecurity

Speakers

– Ms. Timea Suto
– Floreta Faber

Arguments

Industry needs smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors


Albania increased cybersecurity authority staff from 20 to 85 people and expanded critical infrastructure list by 50% following attacks


Summary

Timea advocates for less regulation and more incentives for the private sector, emphasizing that security shouldn’t be solely a private burden. Floreta’s experience shows Albania’s response involved significant regulatory expansion and increased government oversight of critical infrastructure.


Topics

Cybersecurity | Legal and regulatory | Economic


Unexpected differences

Role of government regulation in critical infrastructure protection

Speakers

– Ms. Timea Suto
– Floreta Faber

Arguments

Industry needs smarter policy focused on incentives rather than more regulation, with rebalanced responsibility between private and public sectors


Albania increased cybersecurity authority staff from 20 to 85 people and expanded critical infrastructure list by 50% following attacks


Explanation

This disagreement is unexpected because both speakers advocate for stronger critical infrastructure protection, yet they hold fundamentally different views on the government’s role. Timea, from a private sector perspective, argues against more regulation, while Floreta’s practical experience led to significant regulatory expansion. This reveals a tension between private sector preferences and real-world government responses to cyber incidents.


Topics

Cybersecurity | Legal and regulatory | Economic


Overall assessment

Summary

The discussion showed remarkable consensus on the nature of threats and the need for cooperation, with limited but significant disagreement on regulatory approaches and implementation methods


Disagreement level

Low to moderate disagreement level. Most speakers agreed on fundamental challenges and goals, but differed on specific approaches to regulation and implementation. The main tension was between private sector preference for incentive-based policies versus government experience favoring regulatory expansion. This disagreement has significant implications as it reflects the ongoing global debate about how to balance private sector autonomy with government oversight in critical infrastructure protection.




Takeaways

Key takeaways

Cybersecurity for critical infrastructure requires a multi-stakeholder approach involving governments, private sector, technical communities, and diplomatic communities working together


Cyber resilience is fundamentally about mindset and people, not just technology – requiring awareness and training from top management to individual employees


The threat landscape is escalating rapidly with nearly 40% of state cyber operations targeting critical infrastructure and ransomware attacks surging 275%


No single actor can secure critical infrastructure alone – shared responsibility between public and private sectors is essential


Trust-building between different communities (technical, diplomatic, operational) is crucial and requires sustained long-term investment


Practical cooperation mechanisms like cyber drills, tabletop exercises, and informal communication channels are vital for building operational resilience


International frameworks like UN cyber norms must be operationalized through national legislation, institutional coordination, and practical capacity building


Cross-border coordination is essential given the interconnected nature of critical infrastructure, especially in sectors like energy and telecommunications


Resolutions and action items

Countries should designate points of contact for crisis communication and establish pre-crisis trust networks


Technical and diplomatic communities need more informal meeting opportunities to build mutual understanding


Implementation of cross-visits between technical facilities and diplomatic offices to understand operational realities


Development of secure channels for sharing sensitive threat information across borders between technical professionals


Strengthening of regional cooperation through platforms like CERT-to-CERT information sharing


Investment in long-term trust-building initiatives including youth engagement through cyber camps and alumni networks


Translation of UN cyber norms into practical national frameworks with clear legal mandates and operational structures


Unresolved issues

How to effectively share sensitive technical threat information across borders while maintaining security


Balancing regulatory requirements with operational flexibility for private sector critical infrastructure operators


Addressing the fragmentation of critical infrastructure definitions and frameworks across different countries


Scaling cybersecurity capacity building to meet the needs of 46+ countries requesting ITU support


Ensuring adequate funding and resources for expanding cybersecurity authorities and capabilities


Managing the complexity of interdependent critical infrastructure systems where security measures on one system can affect others


Bridging the maturity gap between well-resourced critical infrastructure operators and smaller companies in supply chains


Suggested compromises

Focus on ‘smarter policy’ with incentives for cybersecurity investment rather than additional regulatory burdens


Rebalance responsibility between private and public sectors, with governments taking more active role in disrupting threat actors while private sector focuses on operational security


Use flexible frameworks and voluntary standards that allow companies to adapt quickly to emerging threats while meeting regulatory requirements


Implement inclusive policymaking processes that give all stakeholders a seat at the table rather than top-down regulatory approaches


Combine formal diplomatic channels with informal technical cooperation mechanisms to bridge different community cultures and working styles


Thought provoking comments

We understood that talking about cyber security is not talking about technology; it’s talking about a mindset, about involving more people, from the top management to the simple employee inside every organization, so that cyber security is something everyone needs to focus on.

Speaker

Floreta Faber


Reason

This comment fundamentally reframes cybersecurity from a technical problem to a human and organizational challenge. It challenges the common perception that cybersecurity is solely an IT department responsibility and emphasizes the critical role of human factors and organizational culture.


Impact

This insight shifted the discussion from technical solutions to human-centered approaches. It influenced subsequent speakers to emphasize training, awareness, and cross-community collaboration. Caroline later built on this by discussing the importance of bridging technical and non-technical audiences, and Lars emphasized the need to make people ‘in the sharp end’ understand their role.


If cybercrime were a country measured by GDP, it would be the world’s third-largest economy.

Speaker

Pavel Mraz


Reason

This striking analogy puts the scale of cyber threats into perspective by comparing cybercrime’s economic impact to national economies. It transforms abstract statistics into a concrete, relatable comparison that emphasizes the magnitude of the challenge.


Impact

This comment established the gravity of the threat landscape early in the discussion, setting a serious tone that influenced all subsequent contributions. It provided context for why the collaborative approaches discussed later are not just beneficial but absolutely necessary given the scale of the challenge.


It’s not more regulation, but smarter policy. Focus less on control and more on creating the right incentives for cybersecurity investment.

Speaker

Timea Suto


Reason

This comment challenges the conventional regulatory approach to cybersecurity and proposes a paradigm shift from compliance-based to incentive-based policy frameworks. It addresses a fundamental tension between government oversight and private sector innovation.


Impact

This insight introduced a nuanced policy perspective that moved the discussion beyond simple public-private cooperation to examining the quality and nature of policy interventions. It influenced the later discussion about the need for ‘inclusive policymaking processes’ and shaped the conversation about sustainable approaches to critical infrastructure protection.


You cannot exchange business cards in a hurricane when a real cyber crisis hits, and you need assistance from abroad… You need to have all these channels, the trust, and the network already in place to know where to reach out.

Speaker

Pavel Mraz


Reason

This vivid metaphor illustrates the critical importance of pre-established relationships and communication channels in crisis management. It emphasizes that crisis response preparation must happen during peacetime, not during emergencies.


Impact

This comment reinforced the importance of proactive relationship-building and influenced the discussion toward practical cooperation mechanisms. It connected with Floreta’s later emphasis on building trust ‘from a young age’ through initiatives like cyber camps, and supported the overall theme of sustained, long-term collaboration rather than ad-hoc responses.


Many of the issues that developing countries are facing are ones that developed countries are facing. Are you being agile? Do you have the right people in the right places? Are the stakeholders actually coordinating?

Speaker

Caroline Troein


Reason

This comment challenges the traditional developed/developing country dichotomy in cybersecurity discussions and identifies universal challenges that transcend economic development levels. It reframes capacity building as a shared global challenge rather than a one-way transfer.


Impact

This insight shifted the conversation from a donor-recipient model to a more collaborative, peer-learning approach. It influenced the discussion toward recognizing that all countries face similar fundamental challenges in coordination, agility, and human resources, regardless of their development status.


We have started a cyber camp of young people in the region… we believe those are things which take time. And sometimes different trust reasons, which are not only about cyber security, prevent you from talking to each other.

Speaker

Floreta Faber


Reason

This comment introduces a long-term, generational approach to building trust and cooperation that acknowledges non-technical barriers to collaboration. It recognizes that geopolitical and historical tensions can impede technical cooperation and proposes a creative solution.


Impact

This insight added a temporal dimension to the discussion, emphasizing that effective cooperation requires sustained, long-term investment in relationships. It influenced the conversation toward recognizing that technical cooperation cannot be separated from broader political and social contexts, and that innovative approaches are needed to overcome these barriers.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional approaches and introducing more nuanced perspectives. Floreta’s reframing of cybersecurity as a mindset rather than just technology set the tone for a human-centered discussion throughout. Pavel’s economic comparison and crisis metaphor established both the scale of the challenge and the urgency of proactive cooperation. Timea’s call for ‘smarter policy’ introduced a sophisticated policy framework that moved beyond simple regulatory approaches. Caroline’s observation about universal challenges across development levels democratized the discussion and promoted peer learning. Finally, Floreta’s generational approach to trust-building added a long-term strategic dimension. Together, these comments elevated the discussion from technical problem-solving to strategic, human-centered, and politically-aware approaches to cybersecurity cooperation. They created a narrative arc that moved from threat assessment to collaborative solutions, emphasizing that effective cybersecurity requires sustained investment in relationships, innovative policy approaches, and recognition of the human factors that underpin all technical systems.


Follow-up questions

How can we make arrangements for sharing sensitive technical data across borders without making it public, while still allowing technical people to defend their systems better?

Speaker

Eirik (participant from IT company owned by the Church of Norway)


Explanation

This addresses a critical gap in international cybersecurity cooperation where technical experts have valuable threat intelligence but lack secure channels to share it across borders for collective defense


How do we handle attribution when we find out where cyber attacks came from, and what do we do with this information diplomatically?

Speaker

Floreta Faber


Explanation

This highlights the challenge of translating technical attribution findings into appropriate diplomatic responses and the need for clear protocols on how to act on attribution intelligence


How do our capacities hold up when attacks are severe and target multiple infrastructures simultaneously?

Speaker

Floreta Faber


Explanation

This addresses concerns about scalability of national cyber response capabilities during coordinated or large-scale attacks affecting multiple critical infrastructure sectors


How do we prepare for what a quantum future would look like in terms of cybersecurity?

Speaker

Caroline Troein


Explanation

This identifies the need for forward-looking research and preparation for quantum computing’s impact on current cybersecurity measures and critical infrastructure protection


How can we ensure security for essential services without overburdening the companies that we rely on to operate and innovate them?

Speaker

Timea Suto


Explanation

This addresses the balance between regulatory requirements for cybersecurity and maintaining business viability, particularly for smaller companies in critical supply chains


How do we handle cyber attacks combined with other types of physical attacks simultaneously?

Speaker

Lars Erik Smevold


Explanation

This highlights the need for research and planning around hybrid attacks that combine cyber and physical elements, which could overwhelm traditional response capabilities


How can the industry better engage or have incentives to engage in multilateral processes where governments discuss protection of critical infrastructure?

Speaker

Marie Humeau (moderator)


Explanation

This addresses the gap between private sector technical expertise and international policy discussions, seeking ways to improve industry participation in global governance processes


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Main Session 2: The governance of artificial intelligence

Session at a glance

Summary

This discussion focused on the governance of artificial intelligence, examining the current landscape of AI regulation and the challenges of creating inclusive, effective frameworks. The panel, moderated by Kathleen Ziemann from the German development agency GIZ and Guilherme Canela from UNESCO, brought together representatives from the private sector, government, civil society, and international organizations to discuss how different stakeholders can collaborate on AI governance.

The panelists acknowledged that the AI governance landscape has become increasingly complex, with numerous frameworks, principles, and regulatory initiatives emerging globally, including the OECD AI principles, UNESCO’s AI ethics recommendations, the EU AI Act, and various national strategies. Melinda Claybaugh from Meta emphasized that while there is no lack of governance frameworks, there remains disagreement about what constitutes AI risks and how they should be measured, suggesting the need for broader conversations about enabling innovation alongside managing risks. Mlindi Mashologu, representing the South African government, highlighted the importance of locally relevant AI governance that addresses context-specific challenges while maintaining human rights principles and ensuring AI systems are ethical, inclusive, and accountable.

Jhalak Kakkar from the Centre for Communication Governance stressed the importance of meaningful multi-stakeholder participation in AI governance processes and argued against creating a false dichotomy between innovation and regulation, advocating for parallel development of both. Jovan Kurbalija from the Diplo Foundation called for bringing “knowledge” back into AI discussions, noting that current frameworks focus too heavily on data while overlooking the knowledge dimension of AI systems. The discussion revealed tensions between different approaches to AI governance, with some emphasizing the need for more regulation and others cautioning against over-regulation that might stifle innovation.

Key themes included the democratization of AI access, the need for transparency and explainability in AI systems, the importance of addressing bias and ensuring inclusive representation in AI development, and the challenge of balancing global coordination with local relevance. The panelists ultimately agreed on the importance of continued multi-stakeholder dialogue and the need to learn from past experiences with internet governance while avoiding previous mistakes in technology regulation.

Keypoints

Major Discussion Points:

The Current AI Governance Landscape: The panelists discussed the “blooming but fragmented” nature of AI governance, with numerous frameworks, principles, and regulations emerging globally (OECD principles, UNESCO recommendations, EU AI Act, G7 Hiroshima AI process, etc.). There was debate about whether this represents progress or creates confusion and fragmentation.

Innovation vs. Risk Management – A False Dichotomy: A central tension emerged around balancing AI innovation with risk mitigation. While some panelists argued for focusing more on enabling innovation rather than just managing risks, others contended this creates a false choice – that governance and innovation must go hand-in-hand from the beginning rather than being treated as opposing forces.

Global South Perspectives and Local Relevance: Significant emphasis was placed on ensuring AI governance is locally relevant and includes voices from the Global South. Panelists discussed the need for context-aware regulation, capacity building in developing countries, and avoiding a “one-size-fits-all” approach that might not address specific regional needs and priorities.

Knowledge vs. Data in AI Governance: A philosophical discussion emerged about shifting focus from “data” back to “knowledge” in AI governance frameworks. This included concerns about knowledge attribution, preserving local and indigenous knowledge, and ensuring that AI systems don’t centralize and monopolize human knowledge without proper attribution.

Multi-stakeholder Participation and Transparency: Throughout the discussion, panelists emphasized the importance of meaningful multi-stakeholder engagement in AI governance processes, moving beyond tokenistic participation to genuine influence on outcomes. This included calls for transparency in risk assessments and decision-making processes.

Overall Purpose:

The discussion aimed to examine how different stakeholders can collaborate to shape AI governance frameworks that are inclusive, effective, and globally coordinated while respecting local contexts. The session sought to move beyond theoretical principles toward practical approaches for implementing AI governance that balances innovation with human rights protection and addresses the needs of all regions, particularly the Global South.

Overall Tone:

The discussion maintained a professional and collaborative tone throughout, though it became more animated and engaged as panelists began to challenge each other’s perspectives. Initially, the conversation was more formal with structured introductions, but it evolved into a more dynamic exchange where panelists directly responded to and sometimes disagreed with each other’s points. The tone remained respectful despite clear philosophical differences, particularly around the innovation-regulation balance and the urgency of implementing governance measures. The moderators successfully encouraged both consensus-building and healthy debate, creating an atmosphere where diverse viewpoints could be expressed and examined.

Speakers

Speakers from the provided list:

Kathleen Ziemann – Lead of AI project at German development agency GIZ (Fair Forward project), Session moderator

Guilherme Canela – Director at UNESCO in charge of digital transformation, inclusion and policies, Session co-moderator

Melinda Claybaugh – Director of privacy policy at Meta

Jovan Kurbalija – Executive director of Diplo Foundation, based in Geneva with background from Eastern Europe

Jhalak Kakkar – Executive director of the Centre for Communication Governance in New Delhi, India

Mlindi Mashologu – Deputy director general at South Africa’s Ministry of Communications and Digital Technology (filling in for the deputy minister)

Online moderator

Audience – Multiple audience members who asked questions during the session:

  • Diane Hewitt-Mills – Founder of global data protection office called Hewitt-Mills
  • Kunle Olorundare – President of Internet Society Nigerian chapter, from Nigeria, involved in advocacy
  • Pilar Rodriguez – Youth coordinator for the Internet Governance Forum in Spain
  • Anna – Representative from R3D in Mexico
  • Grace Thompson – Online participant (question relayed through online moderator)
  • Michael Nelson – Online participant (question relayed through online moderator)

Full session report

AI Governance Discussion: Stakeholder Perspectives on Inclusive and Effective Frameworks

Executive Summary

This discussion on artificial intelligence governance brought together diverse stakeholders to examine current AI regulation approaches and explore pathways towards inclusive frameworks. Moderated by Kathleen Ziemann from GIZ’s Fair Forward project and Guilherme Canela from UNESCO, the session featured representatives from Meta (Melinda Claybaugh), the South African government (Mlindi Mashologu), the Centre for Communication Governance (Jhalak Kakkar), and the Diplo Foundation (Jovan Kurbalija). The discussion covered the current fragmented landscape of AI governance, debates around balancing innovation with risk management, and the importance of Global South perspectives in developing effective AI frameworks.

Current AI Governance Landscape

Fragmented Framework Development

Kathleen Ziemann opened by describing the current AI governance landscape as “blooming but fragmented,” highlighting numerous parallel initiatives including:

  • OECD AI principles
  • UNESCO’s AI ethics recommendations
  • EU AI Act
  • G7 Hiroshima AI process
  • Various national strategies emerging globally

Melinda Claybaugh from Meta characterized this as “an inflection point” where many frameworks exist but fundamental questions remain about implementation and effectiveness. She noted that while governance frameworks are abundant, significant disagreement persists about what constitutes AI risks and how to measure them scientifically.

Jovan Kurbalija provided additional context, noting that AI has become commoditized with 434 large language models in China alone. This proliferation has shifted the risk landscape from concerns about a few powerful AI systems to challenges arising from widespread deployment of numerous AI models.

Panelist Perspectives

Private Sector View: Meta’s Position

Melinda Claybaugh argued that the AI governance conversation may have become overweighted towards risk and safety concerns. She advocated for broadening the discussion to include opportunity and enabling innovation, asking: “Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with?”

Claybaugh emphasised that existing laws and frameworks already address many AI-related harms and suggested assessing their fitness for purpose rather than creating new regulatory structures. She advocated for risk assessment processes that are “objective, transparent, and auditable, similar to GDPR accountability structures.”

Civil Society Perspective: Governance from the Start

Jhalak Kakkar directly challenged the framing of innovation versus regulation as competing priorities, arguing it creates a “false sense of dichotomy.” She contended that innovation and governance must go hand in hand, emphasising that “we need to be carrying out AI impact assessments from a socio-technical perspective so that we really understand impacts on society and individuals.”

Kakkar stressed the importance of meaningful multi-stakeholder participation and strengthening mechanisms like the Internet Governance Forum (IGF) to ensure holistic input from diverse perspectives. She emphasised that transparency and explainability are crucial when bias affects decision-making systems.

Government Perspective: Context-Aware Approaches

Mlindi Mashologu from South Africa emphasised that “there is no one-size-fits-all when it comes to AI,” advocating for foundational approaches grounded in equity. He promoted “context-aware regulatory innovation” through adaptive governance models including regulatory sandboxes that enable responsible innovation while managing risks.

Mashologu highlighted South Africa’s G20 presidency work on developing a toolkit to reduce AI-related inequalities from a Global South perspective. He emphasized that AI governance must ensure technology empowers individuals rather than undermining their rights and dignity.

International Governance Perspective: Knowledge vs. Data

Jovan Kurbalija introduced a unique perspective by arguing for a fundamental shift in AI governance language from data back to knowledge. He observed that while the World Summit on the Information Society originally focused on knowledge, current frameworks have moved to focus on data instead. “AI is about knowledge,” he argued, not merely data processing.

Kurbalija also provided a nuanced view on bias, arguing against the “obsession with cleaning bias” and distinguishing between illegal biases that threaten communities and natural human biases that reflect legitimate diversity. “We should keep in mind that we are biased machines,” he noted.

Key Themes Discussed

Innovation and Risk Management Balance

The discussion revealed different perspectives on balancing innovation with risk management. While Claybaugh emphasised concerns about over-regulation stifling innovation, Kakkar argued for implementing governance mechanisms from the beginning to prevent harmful path dependencies. Mashologu offered a middle ground through adaptive governance approaches like regulatory sandboxes.

Global South Inclusion and Local Relevance

Multiple panellists emphasised the importance of ensuring AI governance frameworks include meaningful Global South participation and local relevance. Mashologu highlighted regional initiatives like the African Union AI strategy, while Kakkar emphasised international coordination through existing multi-stakeholder forums.

Human Rights and Transparency

There was broad agreement on anchoring AI governance in human rights principles and ensuring transparency and explainability, particularly for systems affecting human lives. However, disagreements remained about implementation approaches, with industry preferring self-regulatory mechanisms and civil society advocating for external oversight.

Audience Engagement

Environmental and Social Justice Concerns

An audience member from R3D in Mexico challenged the panel about environmental impacts and extractivism related to AI infrastructure development, particularly regarding data center placement and resource extraction. This highlighted how AI governance discussions often overlook broader environmental and social costs that disproportionately affect Global South communities.

Practical Implementation Questions

Online questions addressed specific frameworks like the Council of Europe’s convention and practical implementation challenges. Audience members also raised concerns about bias in data collection and the need for inclusive approaches that account for multiple stakeholder perspectives.

B Corp Social Offset Proposal

One audience member proposed a B Corp social offset model for AI companies, suggesting mechanisms for corporate accountability beyond traditional regulatory approaches.

Areas of Agreement and Disagreement

Consensus Points

Panellists agreed on several fundamental principles:

  • Importance of multi-stakeholder participation
  • Need for transparency and explainability
  • Value of building upon existing legal frameworks rather than creating entirely new structures
  • Importance of human rights as foundational principles
  • Need for contextual adaptation of governance frameworks

Persistent Tensions

Key disagreements included:

  • Emphasis and timing of governance mechanisms (early implementation vs. avoiding over-regulation)
  • Adequacy of existing frameworks versus need for AI-specific mechanisms
  • Preferences for self-regulation versus external oversight
  • Approaches to addressing bias and ensuring inclusivity

Conclusions

The discussion highlighted both the complexity of AI governance challenges and the diversity of stakeholder perspectives. While panellists agreed on many fundamental principles, significant differences remained regarding implementation approaches and priorities. The conversation demonstrated the ongoing need for inclusive dialogue that brings together diverse perspectives while addressing practical governance challenges.

The session underscored the importance of ensuring Global South voices are meaningfully included in AI governance development, and that frameworks must be adaptable to local contexts while maintaining coherent overarching principles. The debate between innovation enablement and risk management continues to be a central tension requiring careful navigation as AI governance frameworks evolve.

Session transcript

Kathleen Ziemann: Welcome. Welcome to the main session on the governance of AI. My name is Kathleen Ziemann. I lead an AI project at the German development agency GIZ. The project is called Fair Forward. I will be moderating this session today together with Guilherme. Guilherme, maybe you introduce yourself as well. Hello, good morning everyone. My name is Guilherme Canela and I’m the director in UNESCO in charge of digital transformation, inclusion and policies. A real pleasure to be here with Kathleen and this fantastic panel.

Yes, so Guilherme and I are very excited to have representatives from different regions and sectors here on the panel that will discuss AI governance with us. And dear panelists, thank you so much for coming. Let me briefly introduce you. So, to our left we have Melinda Claybaugh, director of privacy policy at Meta. Welcome Melinda. And next to Melinda sits Jovan Kurbalija, executive director of Diplo Foundation, based in Geneva but with a background from Eastern Europe. And next to you, Jovan, sits Jhalak Kakkar. Welcome Jhalak, happy to have you. Jhalak Kakkar is the executive director of the Centre for Communication Governance in New Delhi, India. And we are very happy also to welcome you, Mlindi Mashologu.

You are filling in for the deputy minister from South Africa from the Ministry of Communications and Digital Technology, and you are the deputy director general at the ministry. Thank you all for coming, and we are very sad that Mondli couldn’t come. He was affected by the recent activities in Israel and Iran and his flight could not come through. Before you set the scene from your perspectives, I would love to give a brief introduction on what we understand by AI governance at the moment and also give us an idea of how to discuss this further. As this IGF’s theme is building digital governance together, we want to discuss how we can shape AI governance together, as we still observe different levels and possibilities of engagement across sectors and regions. I would say that currently the AI governance landscape is blooming.

Yes, we have AI governance tools like principles, processes and bodies emerging globally, and I think we can also somehow lose track in that blooming landscape. Just to name a few: in 2019, the OECD issued its AI principles, followed by UNESCO’s recommendation on the ethics of AI in 2021. In 2023, I don’t know if you still remember, but AI companies such as OpenAI, Alphabet and Meta also made voluntary commitments to implement measures like watermarking AI-generated content, and finally last year the EU AI Act came into force as the first legal framework for governing AI. Additionally, existing fora and groups are addressing AI and its governance. For example, last year the G7 launched the Hiroshima AI process, and the G20 has declared AI a key priority this year, and I think we’ll be hearing more about that from you, Mlindi, later. And then we have also various declarations, endorsements and significant communications issued by many, like the Africa AI declaration that was signed in Kigali, for example, or the declaration on responsible AI that was signed in Hamburg recently.

And as a core document for 193 member states, the UN’s Global Digital Compact calls for concrete actions for global AI governance by establishing, for example, a global AI policy dialogue and a scientific panel on AI. So when we look at all these efforts, it seems like AI governance is not only a blooming but also a fragmented landscape with different levels and possibilities of engagement. So how do you, dear panelists, perceive this and what are your perspectives but also ideas on the current AI governance? What should be changed? What is missing? We would love to start with your perspective, Melinda, from the private sector. Feel free to use the next three to four minutes for an introduction statement and yes, there you go.

Melinda Claybaugh: Great, thank you so much and thanks for having me. It’s a pleasure to be here. Just a little perspective to set the context from where Meta sits in this conversation. So at Meta, I think everyone’s familiar with our products and our services, our social media and messaging apps. But in the AI space, we sit at two places. One, we are a developer of large language models, foundational Gen AI models. They’re called Llama and many of you might be familiar with them or with applications built on top of them. So we are a developer in that sense and we focus largely on open source as the right approach to building large generative AI models.

At the same time, we build on top of models and we provide applications and systems through our products. So we’re kind of in both camps, just to situate folks. I was glad that you laid out the landscape; in the last couple of years, it’s incredible the number of frameworks and commitments and principles and policy frameworks. It’s head spinning at times, having lived through that. And so I think it is really important to remember there’s no lack of governance in this space. But I do think that we are at an interesting inflection point. And I think we’re all kind of wondering, well, what now? We set down these principles, we have these frameworks; a company like Meta, for example, has put out a frontier AI framework that sets out how we assess for catastrophic risks when we’re developing our models and what steps we take to mitigate them.

And yet there’s still a lot of questions and concerns. And I think we’re at this inflection point for a few reasons. One, we don’t necessarily agree on what the risks are and whether there are risks and how we quantify them. So I think we see different regions and countries want to focus more on innovation and opportunity. Other folks want to focus more on safety and the risks. There’s also a lack of technical agreement and scientific agreement about risks and how they should be measured. I think there’s also an interesting inflection point in regulation. The EU, for example, was very fast to move to regulate AI with the landmark AI Act. And I think it’s running into some problems. I think there’s now kind of a consensus amongst key policymakers and voices in the EU that maybe we went too far and actually we don’t know whether this is really tied to the state of the science and how to actually implement this in a way. And now they’re looking to pause and reconsider certain aspects of digital regulation in Europe.

And then a lot of countries are kind of looking for what to do and are looking for solutions for how to actually adopt and implement AI. And so I don’t think I have an easy answer, but I think we are at a moment to kind of take stock and say, okay, we’ve talked about risk. Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with and making sure the right voices, the right representation from little tech to big tech from all corners of the world are represented to have these conversations about governance.

Kathleen Ziemann: Thank you very much, Melinda. I would love to continue with you, Mlindi, and get the perspective from the South African government. How do you perceive the current landscape? What is important to you at the moment?

Mlindi Mashologu: Thank you. Thank you, Kathleen, for that. I think from the South African government’s perspective, it is general knowledge that AI is a true general-purpose technology, much the same as electricity or the Internet, and it affects various sectors of our economy. But also, we see that with such transformative power comes responsibility, and we want to ensure that AI systems are not only effective, but also ethical, inclusive, and accountable.

So, I think that’s one of the first things that we want to do. But also, to govern AI effectively, we’re trying to come up with a shared vocabulary and a principled foundation, as reflected in some initiatives that you mentioned before, like the OECD principles and UN-level initiatives. We are also trying to make sure that we are not only focusing on AI in general, but also on the required sector-specific policy interventions that are technically informed and locally relevant, because we see that regulating AI in financial services would be different from regulating AI in, say, agriculture.

So, we’re trying to come up with different approaches. One of the areas that we are focusing on as a government, from the regional point of view, is to make sure that our approach is grounded in the principle of data justice, which puts human rights, economic equity, as well as environmental sustainability at the center of AI. We also recognize that the impact of climate change on human rights and environmental sustainability can reinforce historical inequities, so that’s one of the concrete proposals that we’re looking into. The other area that we’re focusing on is sufficient explainability, which is a requirement for AI decisions that we’re advocating, especially for those decisions that impact human lives and livelihoods.

So, if you were to look at areas such as, for instance, credit scoring, predictive policing, or healthcare diagnostics, we need to have a right to understand how these decisions have actually been made and how the AI systems arrive at them. Further to that, one of the areas that we are following as well is human-in-the-loop learning, from the design as well as the deployment of AI systems, so humans must guide and, when needed, override automated systems. This also includes reinforcement learning with human feedback and clear thresholds for interventions in high-risk domains.

I think the last point that I just want to focus on is that our participation in global AI governance has been very, very important. We have a lot of partnerships there, and from our side as a country, in terms of the policy that we are currently developing, we are looking to leverage areas where frameworks have already been developed, which include the African Union data policy framework. So we are building models of governance rooted in equity, working with the African Union, and we don’t want AI to replace humans; we want AI to actually work with humans, assisting us with some of the most pressing needs of our society.

Kathleen Ziemann: Thank you very much. Especially the local relevance of AI governance will also be discussed in our round later, so that is a very important point you made. Thank you very much. Jhalak, you are very much rooted in civil society as well as in research, so if you could bring these two perspectives together, that would be very much appreciated.

Jhalak Kakkar: Thank you. Thank you, Kathleen. I think, you know, when we think about AI governance, one is what is the process and input into the creation of AI governance either internationally or domestically? And then, actually, what is the substance of, you know, what we are structuring AI governance as? And if I can first just take a couple of minutes to talk about the process. I think, you know, if we learn from the past, it is very important to have multi-stakeholder input as any sort of governance mechanism is being created, because different stakeholders sitting at different parts of the ecosystem are able to bring forth different perspectives, and we end up in a more multi-stakeholder environment.

I think one of the things that we have increasingly seen is a shift towards multilateralism, and the IGF is a perfect place to talk about the need to focus on multi-stakeholderism and enabling meaningful participation: not participation that is done as a matter of form, but participation that actually impacts outcomes and outputs.

I think the second piece that I want to talk about when I talk about process is the increasing need to meaningfully engage with a broader cross-section of civil society, academia and researchers, including those bringing valued and informed perspectives from the global south. The way a toaster works in the United States, versus the way it works in Japan, versus the way it works in Vietnam or India, is pretty much the same, but AI as a technology will be shaped, in the way it functions and the way it impacts, very differently in different contexts.

So, the third piece that I want to talk about is that creating a process that is meaningful to civil society across the global majority is very important to enable, and we can talk later in this conversation about some of the challenges that have been preventing that currently. If we talk about the substance of AI governance, one piece is how we really, truly democratise access to AI. A lot of technology development historically has been concentrated in certain regions. At a moment in time when we’re talking about the WSIS plus 20 review, I want to go back to something that was articulated in the Tunis agenda, which spoke about facilitating technology transfer to bridge developmental divides.

While that’s happened, perhaps, with ICANN and ISOC supporting digital literacy and training, there have been less substantial moves towards operationalisation for AI. I think in this context, it’s very important to think about how, from the get-go, we enhance the capacity of countries to create local AI ecosystems so that we don’t have a concentration of infrastructure and technology in certain regions. We talk about mechanisms such as open data set platforms, some kind of AI commons; how do we facilitate that access to technology and to an AI commons, and really think about how we democratise access to this technology so that we have AI for social good which is contextually developed for different regions and different contexts? The last thing I want to talk about is that regulation and governance is not a bad word. Very often, I’m hearing conversations along the lines of: we’ve talked about risk.

Let’s focus on innovation now. I think that’s creating a false sense of dichotomy. I think they have to go hand in hand. And I think the mistake that we’ve made in the past is not developing governance mechanisms from the get-go. And it doesn’t have to be heavy governance and regulation from the get-go, right? At this stage, and Melinda was talking about the fact that we don’t understand what the risks are, we need to be assessing risks. We need to be carrying out AI impact assessments. This has to be done from a socio-technical perspective so that we really understand impacts on society and on individuals, because otherwise we’re going around in circles saying we don’t know what the risks are, we don’t know what the harms are, we don’t know how it’s going to impact us.

So let’s start setting up mechanisms, whether it’s sandboxes, AI impact assessments, or audits. I know that we’ll go back to the conversation of: but there’s a regulatory burden to this, it’s going to slow down innovation. But are there ways we can start to think about operationalizing these in light-touch ways, so that we can in parallel start to understand what harms and impacts are coming up, so that we don’t create path dependencies for ourselves later on, where we’re then just doing band-aid solutions?

So I think that’s a big part of what we’re trying to do. I think it’s important to start with the approach of understanding the impact of AI on the whole, and to think about the evolution of the technology so that it’s beneficial to our society and to individuals, rather than landing in a space where it has developed in a direction we didn’t quite envisage, with unintended consequences we didn’t realize would come from shaping it in a particular way. I’ll stop here and come in with more points later.

Kathleen Ziemann: Thank you. Jovan, you have a lot of practice in AI, you call yourself a master in AI. We would love to hear your perspective on your own role in AI, but also on how AI is governed in Europe.

Jovan Kurbalija: Thank you, Kathleen. It’s a really great pleasure to be here today. When I was preparing cognitively for the session, I asked myself how we can make a difference. And one point which is fascinating is that in three years’ time, the AI landscape has changed profoundly. Almost three years ago, when ChatGPT was released, it was magical technology. It can write you poetry, it can write you a thesis, whatever you want. And at that time, you remember, the reactions were: let’s regulate it, let’s ban it, let’s control it.

There were knee-jerk reactions: let’s establish something analogous to the nuclear agency in Vienna for AI. And there were so many ideas. Fast forward, today we have a realism, and for those colleagues from Latin America, the metaphor could be that AI governance is a bit of magical realism, like Vargas Llosa, García Márquez, and others. You have the magic of AI, like any other technology. And I guess many of us in this room are attracted to the internet and AI and digital because of this magical element. But there is a realism, and I will focus now on this realism. The first point is that AI became a commodity. We heard yesterday that in China there are 434, as of yesterday, large language models. I think similar statistics hold for other countries worldwide.

Therefore, AI is not something which is just reserved for a few people in the lab. It’s becoming an affordable commodity. It has enormous impact. One impact is that you can develop an AI agent in five minutes. Exactly; our record is four minutes, 34 seconds. That was basically unthinkable only a few years ago, when it was a matter of years of research. That’s the first point. Therefore, the whole construct about risks is basically shifting towards this affordable commodity. The second point is that we are now on the edge where we will have basically AI on our mobiles. And then the question we can ask is: today we will produce some knowledge here in our interaction.

Should that knowledge belong to us, to the IGF, to our organizations, or to somebody else? This is the second point, of bottom-up AI. We will be able to codify our knowledge, to preserve our knowledge, individual or group or family, and that will profoundly shift AI governance discussions. And the third point in this context which I would like to advance in this introductory remark is that we have to change our governance language. If you read the WSIS documents, both Tunis and Geneva, the key term was knowledge, not data. Data was mentioned here and there. Now, somehow, in 20 years’ time, and I hope it will be reflected in WSIS Plus 20, knowledge has been completely cleaned out. You don’t have it in the GDC, you don’t have it in the WSIS documents; you have only data. And AI is about knowledge. It’s not just about data.

That’s an interesting framing issue. In the discussion, I hope that we can come to some concrete issues about, for example, sharing weights, and through that sharing our knowledge, and the ways we can protect our knowledge, especially from the perspective of developing countries. Because we are on the edge of the risk that that knowledge can be basically centralized and monopolized, and we had all these experiences in the early days of the Internet, where the promise that anyone can develop a digital solution, an Internet solution, ended at the end of the day with just a few being able to do it. And that wisdom should help us in developing AI governance solutions, and we can discuss concrete ideas and proposals.

Kathleen Ziemann: Thank you very much, Jovan, also for the references to the whole history of the Internet. I think that’s always great to have as expertise on the panel here at the IGF. Thank you all for setting the scene. I think we already got an idea about the different perspectives we have here, and also the possibilities for synergies, but maybe also for conflict. And that’s also a bit our role as moderators, to bring out both of these possibilities with you on the panel. We would love to start now a more open round of discussion here. We have prepared questions for you. We will start, but then we also hope that something evolves between you, and that you can refer to each other and answer a bit of the questions that have already been put in the room here. But first of all, we would start with you, Mlindi, in terms of also giving us an idea. You already spoke about the local relevance of AI and how to insert that into global processes. As South Africa is currently holding the G20 presidency, how will you make sure within your functions that the local relevance of AI and the AI frameworks that South Africa has established will be included in the global dialogue here?

Mlindi Mashologu: Thank you, Kathleen. I think it’s important to know that AI is a priority in terms of our G20 presidency. The reason why we put it there is that we picked up that how we govern it will determine how inclusive we keep it, and how our societies will actually look tomorrow. So, in our approach, what we have tried to do is to ground the governance in two complementary dimensions, one being macro foresight and the other micro precision.

So, from the macro foresight point of view, we look at AI from the long-term view, and also recognize its impact on society over a much longer period, shaping our economy. From our G20 agenda, we are championing the development of a toolkit which will try to reduce the inequalities connected to the use of AI. This toolkit also seeks to identify the structural and systematic ways in which AI can both amplify and redress inequality, especially from the global south. But also, we see that this foresight requires geopolitical realism, because AI cannot be dominated by a handful of countries or private sector actors; it has to be multilateral, multi-stakeholder, as well as multi-sectoral. That is why we are working on expanding the scope of participation, bringing more voices from the continent, from the global south, and from underrepresented communities to the center of the AI governance dialogue. But also, we are matching the macro vision with micro precision, whereby we look at the ability to address granular, context-specific realities.

So, as I highlighted before, we see that there is no one-size-fits-all when it comes to AI. From there on, we advocate for context-aware regulatory innovation, which includes regulatory sandboxes, human-in-the-loop mechanisms, but also adaptive policy tools that can be calibrated to sector-specific risks and benefits. One of the areas that we are focusing on as well is to ensure that we do capacity building and develop local talent, research ecosystems, as well as ethical oversight mechanisms, because we believe that AI governance must be owned by all sectors of our economy, from the rural areas to the cities.

But also, from our presidency, we aim to bridge governance within the regional frameworks, so we align with the African Union's emerging AI strategy, NEPAD's science, technology and innovation frameworks, as well as regional policy harmonization through SADC. We see that these integrations at the regional level are not peripheral; they are foundational in terms of the global governance agenda. Finally, in terms of our G20, we would like to call on our partners and international institutions to support distributed AI governance architectures, so that we can all be inclusive and equitable, and make sure that the benefits of AI can be meaningful for our society, while we also address the associated risks related to AI.

Guilherme Canela: So, Melinda, moving to you now. Actually, Jhalak stole my thunder when I was preparing the follow-up question to you, because I think she touched on a point that I'm sure several in the audience and online have thought about when you were speaking: what she called the false dichotomy between innovation and protecting human rights, right? Because in the end, the objective of governance, if it's done in alignment with international human rights law, is to protect human rights for all, not only for the companies, right? So, how do you respond to this?

You framed it, of course very briefly, as if it were an antagonism between those two things. At the same time, we know all companies, including yours, are investing in human rights departments and reports, and, when there are specific issues, like elections, in how to deal with these technologies and their risks. And yet there is a lot of scepticism regarding the way the private sector, not only your company, is dealing with this situation. So, if you could go a bit deeper on what Jhalak was describing as, in her view, a false dichotomy between those two things.

Melinda Claybaugh: Yeah, I mean, I guess I would agree, to be provocative. In fact, what I'm trying to say is that we need to look at everything together. The debate about AI — and by AI, to be clear, I'm talking about advanced generative AI; I think we tend to talk about AI kind of loosely, but the conversations to date at the international institution level and the framework and commitment level have really been about the most advanced generative AI — those conversations have largely been focused on risk, and safety risk. That's an important piece, of course, and we've implemented a frontier AI safety framework to address concerns about catastrophic risks. About the conversation around harm and risk, however, I'd say two things.

One, I think we need to be very specific about what are the harms we’re trying to avoid, and as you point out, a lot of the harms we’re trying to avoid are harms that already exist that we’ve been trying to deal with. So, people talk about misinformation, people talk about kids, people talk about all the things that are existing problems that have existing policy frameworks and solutions to varying degrees that differ in different places. What I am trying to convey is that we also need to be talking about enabling the technology, not to say ignoring risk, not to say not having that conversation, but we’re missing a key element if we’re not talking about everything together.

Because otherwise it becomes overweighted in one direction, and I don't think there's a global consensus around the idea that advanced generative AI is inherently dangerous and risky. I think that's a live question that a lot of people have opinions about, but there is also a lot of interest in, and there are opinions about, the benefits and advances of AI, and so I think that all needs to be brought together into one conversation. I will also say that there are existing laws and frameworks already in place that even pre-date ChatGPT, right? We have laws around the harms that people are talking about — around copyright and data use and misinformation and safety and all of that. We have legal frameworks for it, so I would like to see attention on how those legal frameworks are fit for purpose or not with the new technology, rather than seeking to regulate the technology.

Kathleen Ziemann: Thank you, that’s a very interesting aspect that Jhalak was also touching upon a bit, especially on that idea whether we can use the already existing laws and frameworks in the context of this new technology. Jhalak, how do you perceive this? Do we have all the rules already, and if not, what is missing?

Jhalak Kakkar: Yeah, I think there's been a lot of conversation around whether there is existing regulation that can apply to AI, and whether there's a need for more regulation. There are several existing pieces of legislation that would be relevant in the context of AI, just to name a few: data protection, competition and antitrust law, platform governance laws in different countries, consumer protection laws, criminal laws. So, yes, I also agree with Melinda's point that we need to think about how some of these laws may be fit for purpose.

Do they need to be reinterpreted, reimagined, amended to account for the different context that AI brings in? If I can give an example: consider the way we've seen the need for traditional antitrust and competition law to evolve in the context of digital markets. When internet platforms came in, you could have said, we have existing competition law, we have existing antitrust law, and that's going to apply — and we have seen over the last couple of decades that it is not fit for purpose to deal with the new realities of network effects, data advantage, zero-price services, and multi-sided markets that have come with the advent of internet platforms.

Similarly, we already see a hot debate happening around copyright law — whether copyright law is well-positioned to deal with the unique situation that has arisen where companies are training their LLMs on a lot of the knowledge and data available on the internet, relying on the fair use exception. What was the intention of the fair use exception under copyright law? It was that big publishers should not amass all knowledge to themselves, and it gives people like you and me access to use that knowledge, reference that knowledge, and build on that knowledge. But it's an interesting situation where you now have large companies leveraging fair use.

So I think we already have courts around the world dealing with this issue. I'm sure legislatures are going to deal with it, and it's a question that, as a society, we have to think about: yes, there is development, and there are new things that these companies are doing — fundamentally, maybe there's a transformation they are making when they build on this — but what are we losing out on? What are the advantages? We have to weigh all of that to think it through. Coming back to the false dichotomy point, I want to go back to that. Yes, we know a lot of harms that have already arisen in the digital and internet platform context. We're well aware of those, and we — civil society, academia, researchers — are looking out for them as we see AI and, if we're talking more specifically, LLMs.

But those are existing harms that we're looking for. There are a lot of harms that we don't yet know may exist. Just to give an example, I don't think 15 years back we thought about the kind of harm social media platforms would have on children. It just wasn't something that was envisaged. Maybe someone could have envisaged children seeing some content, but the mental health impacts, the cyberbullying — the extent and nature of it — a lot of this was unintended and unenvisaged. Unless we are scrutinizing these systems — and it's not only a question of catastrophic risk; we have to think about individual-level impacts and societal-level impacts — and unless we're engaging with these systems and understanding them from the get-go, those impacts and implications and negative consequences will only surface five to ten years from now. And while it's wonderful to see companies heavily investing in human rights teams and trust and safety teams — as a space, we didn't have trust and safety ten years back, so it's a new space that has grown — we cannot rely on that alone.

You have so many professionals coming into this space with specialized skill sets, and it's great to see that. But we've also seen that companies have never been particularly adept at working only under the realm of self-regulation. And this is across industries — I'm not only pointing to tech; we've seen it time and time again over the last 150 years of industrial regulation. So I think we have to move beyond the sense that companies will self-regulate. Very often they don't disclose harms that are apparent to them, and we need external regulators, we need communities to be engaging in a bottom-up approach, civil society to be engaging, multilateral institutions to be coming in. We need the development of guidance and guidelines to operationalize the AI principles that we've all been talking about and working on over the last five, seven, eight years. So I think we have to move forward into the next phase of AI governance.

Guilherme Canela: Thank you, very interesting. So now, what's going to happen: I will do a follow-up question to Jovan, and then we are going to open it to you. So, if you want to start queuing at the available mics, you are welcome to do so. Jovan, let's go back to the magical realism and the issue of bringing knowledge back into the discussion. It's a very interesting point you raised. You probably remember that, around the Tunis round of the World Summit, UNESCO published a very groundbreaking report called Towards Knowledge Societies. It's very interesting: to this day, every week that report is one of the most downloaded in the UNESCO online library, which shows that, independently of what we are discussing here in these very limited circles, people overall are still very much interested in the knowledge component of this conversation.

So, with this preamble, let me ask you to go a bit deeper: how do we bring knowledge back into this conversation? Of course connecting with the new topics — data is a relevant issue, and we can't ignore the discussion of data governance — but the South African presidency has three main topics, correct me if I'm wrong: solidarity, equality and sustainability. And if you read that UNESCO report of 20 years ago, connecting with the challenges of the then-information society, you'll see those three keywords appearing, maybe in different ways. People like Manuel Castells and Néstor García Canclini were saying those things. So what is your view on how we get back to this important part of the conversation when we are looking at AI governance frameworks?

Jovan Kurbalija: Sure, it’s good that you brought this, by the way, excellent report. Two reports are excellent, the UNESCO and World Bank report on the digital dividends, those are landmark. What worried me was, I studied and I didn’t want to bring it, but you told that you don’t mind controversies, even UNESCO, which set the knowledge stage with that excellent report, backpedaled on the knowledge in the subsequent years, which was part of the overall policy fashion. Data is, even in the ethical framework, the recommendation data is more present.

That’s the first point. The second point, why people download it, they react intuitively, they can understand knowledge. Data is a bit abstract, knowledge is what we are now exchanging, creating, developing. And my point is that common sense element is extremely important, and through that, through bottom-up AI, through, let’s say, preserving knowledge of today’s discussion, may be excellent questions that we’ll have. This is knowledge that was generated by us at this moment, and this is also, back to Marcus and other magical realism, you have to grasp the moment. And it’s not, it’s technically possible, it’s financially affordable, and it’s ethically desirable, if you want this trinity.

But let me, on your question, just reflect on two points of discussion. There are many false dichotomies, including on the question of knowledge. I can list them: you have multilateral versus multi-stakeholder, privacy versus security, freedom versus public interest. And we can label them as false dichotomies, but I think we should take a step forward. Ideally, we should have both — multi-stakeholder and multilateral, privacy and security — but sometimes you have to make trade-offs. And it is critical that trade-offs are made in a transparent way, so that you can say: okay, in this case I'm going with the multilateral solution, because governments have respective roles and responsibilities. You can find this in many other fields.

And back to your question about bringing the discussion to common sense, and the references that colleagues made. I would go back not only 150 years, or even to the Napoleonic code — I would go to Hammurabi, 3,400 years ago. There is a provision in Hammurabi's law: if you build a house, and the house collapses, the builder of the house should be punished with a death sentence. A harsh one — that was the time; we don't want that. What we are missing today — let me give you one example, and I will conclude with this. Diplo has its own AI. We are reporting from this session. And take a hypothetical situation: our AI system gets confused and says that two of you — or all of us — said something which you didn't say.

And you go back to Paris, and your boss says, hey, by the way, did you really say that? And you say, no, I didn't say it, but Diplo reported it. Now, who is responsible for it — ethically, politically, legally? I'm responsible. I'm the director of Diplo. Nobody forced me to establish an AI system, to develop an AI system. We are losing a common sense which has existed from Hammurabi, through the Napoleonic code, till today: somebody who develops an AI system and makes it available should be responsible for it. There are nuances in that now, but the core principles are common sense principles. Therefore, in that sense, people, by downloading knowledge, are reacting with common sense. I think in AI governance we should really get back to common sense, and be in a position to explain to a five-year-old what AI governance is. And it's possible. I would say this is a major challenge for all of us in this room, and, I will say, for the policy community: to make AI governance common sense, bottom-up, and explainable to anyone who is using AI.

Kathleen Ziemann: Thank you very much. I don’t see a queue behind the mics yet, and I think we also have someone. That is great. Welcome. Happy to have your questions now towards the panel. It would be great if you could say who you are, from which institution, and also to whom you would like to direct your question.

Audience: Thank you. My name is Diane Hewitt-Mills, and I'm the founder of a global data protection office called Hewitt-Mills. For those that don't know, under the GDPR certain organizations are mandated to appoint a data protection officer: an individual or an organization that has responsibility for independently and objectively reviewing the organization's compliance when it comes to data protection, cybersecurity, and, increasingly, AI. I'm a UK-qualified barrister. I've been working in the area of governance for over 25 years — data protection and privacy-focused governance — and I've been running this organization for seven years, which I'm very proud to do as a female founder; I know I'm a very rare beast. Importantly, I decided five years ago to go for the B Corp standard, and I don't know if you're aware, but B Corp is a standard for organizations that can demonstrate high standards in environmental and social governance, ESG. So my comment, or recommendation, is this: we oversee carbon offsets and organizations' efforts to demonstrate ESG, and I had a thought — would it be an idea if organizations could also demonstrate their social offset? For example, if you are a tech business or health business using AI, would it be an idea that you document the existing risks, think about foreseeable risks, think about how you could offset those risks in an objective way, and have an independent overseer of that type of activity? I just thought I'd throw that out there to the panelists, because we're thinking about creative ideas for making AI governance tangible and explainable, and I wondered, for example, if that had been a requirement for social media platforms 15 years ago — to demonstrate their social offset — what sort of world we might be in today.

Kathleen Ziemann: Thank you very much. So, I think it was not specifically directed to someone on the panel, so whoever wants to take that question, I’m looking at you, Melinda, but I think it might be relevant for others as well.

Melinda Claybaugh: Yeah, I'm happy to take a start at it. What you're talking about is really a risk assessment process that is objective and transparent and auditable in some fashion. You're right, that is the basis of the accountability structure of the GDPR that so many data protection laws have been built on. Increasingly we see it in the content regulation space as well, particularly in Europe: there are risk assessments and mitigations and transparency measures that can be assessed by external parties. And interestingly, we are seeing that in some early AI regulation attempts. I speak most fluently about what's going on in the US, but we are seeing very similar structures around identifying and documenting risks, demonstrating how you're mitigating them, and then in some fashion making that viewable to some set of external parties. I do think that is a proven and durable type of governance mechanism that makes a lot of sense. We still come to the issue, however, of what the risks are and how they are assessed. And I say that because it is a particularly thorny challenge in the AI safety space, and there are healthy debates around what risks are tolerable or not. But as a framework, that makes a lot of sense, and there are a lot of professionals who already work that way, and companies already have those internal mechanisms and structures. So I would be surprised if we didn't land in a place like that — and in fact, that's essentially the structure the EU AI Act proposes.

Guilherme Canela: Sorry, just a quick follow-up question. In that case, even if there is no consensus about what the risks are, the transparency that you were saying you also agree with is part of the solution, right? The companies don't need to be forced into agreeing on the risks, but they need to be transparent in telling the stakeholders what issues they consider risks and how they are mitigating them, right? Because the problem may be to say "this is a risk, you need to report on that"; but when the requirement is to report on how you do risk assessments, then it's a different ball game, right?

Melinda Claybaugh: Yeah, I think the trick — I'm thinking about this through the lens of an open-source model provider, and this is another tricky area of AI governance and regulation: how you govern closed models and open models may be very different. We do all kinds of testing and risk assessment and mitigation of our model, and then we release it for people to build on, add their own data to, and build into their own applications. We don't know how people are going to use it. We don't know how they end up using it; we can't see that, and so we can't predict how the model will be used. So I think there are just nuances as we think about this in terms of who's responsible for what. Some of it's common sense — who's using it — and I think that's part of the value chain issue that people talk about.

Kathleen Ziemann: I see that Mlindi also wanted to react to the question.

Mlindi Mashologu: I think for me the important thing is — and that is why, for us as policymakers, we just want everybody to play fair when it comes to AI. There are areas where we understand that there will be self-regulation from the organizations, but what is important is to make sure that at least we can look at all these risks that are emanating and deal with them collectively, both from the private sector as well as from the government. Because we as government don't want to be seen as doing hard regulation, which might end up stifling innovation; we want to make sure that everybody can be protected, while, from the private sector point of view, you can also derive the value that you want to derive from AI systems.

I think that's what is important. But the other area that I've touched on before, explainability, is actually very important, because you use these models and they might make decisions that can be very harmful to human lives. That's why we say these decisions need to be explainable. But it also means that whenever the model makes a decision, it needs to have considered broad data sets from various demographics, to make sure that you don't look at only a few demographics and say, okay, the model can take a decision based on the small amount of data that you trained it on.

Kathleen Ziemann: Yes, definitely, and that's also, I think, a big achievement of the open source community: to really stress that factor of explainability — what is actually happening within the data and within the models. We would love to move on with the on-site questions, and we switch to this microphone. Happy to hear your question. Thank you very much.

Audience: My name is Kunle Olorundare. I'm from Nigeria, and I'm the president of the Internet Society, Nigeria chapter, and we are into advocacy and the like. My concern is this: I know it's just about the right time for us to start discussing AI governance — there's no gainsaying that. However, there are issues that we really need to start looking at critically, and one of those issues has to do with the way data is being collected. I listened to Jovan earlier when he was emphasizing the issue of knowledge. I agree 100%, because the end product of artificial intelligence is knowledge.

However, how we gather this data, I think, is very important. I'm saying that because we are looking at an AI that is going to be inclusive, that will have value for every community, and you will agree with me that this data gathering is being done by experts, and every individual person has their own bias.

So, I believe that whatever data you gather is as inherently flawed as the bias of the person that gathered the data in the first place. We need to start looking at how we are going to bring inclusivity into how we bring all this data together, considering all the multistakeholders. I think that is very important. That is on one hand. And for me, I think it will get to a stage where even this AI we are talking about becomes a DPG, a digital public good. I'm saying that because it's going to be available to everybody, and everybody should be able to use it for whatever purpose they want.

But before we get there, how do we ensure that we put everybody on the same pedestal, in the sense that we need to have a framework that is universal? I listened to Melinda when she was talking about frameworks, and I began to see different frameworks coming from different stakeholders. So we need to sit down and bring all these frameworks together, so that we can have a universal framework that speaks to the issues that bother everybody, and the AI we're going to have at the end of the day is going to be universal and able to take care of everybody's concerns. So I want the panelists to react to this — I think Jovan and probably Melinda should be able to react to this. Thank you very much.

Kathleen Ziemann: Thank you very much. Jovan, do you want to go first?

Jovan Kurbalija: Just quickly — excellent point and question. Two comments, both controversial, but the first one more so. We have had a lot of discussion about cleaning biases — and I'm not speaking about illegal biases, biases which basically insult people's dignity; that's clear, and that should be dealt with even by law. But we should keep in mind that we are bias machines. I am biased. My culture, my age, my, I don't know, hormones, whatever, are defining what I'm saying now, or what questions you ask.

Therefore this obsession — which is now calming down, but it existed, let's say, one or two years ago — with cleaning bias was very dangerous. Yes, illegal biases, biases that threaten communities — definitely. But I would say we have to bring more common sense into this again. The second point you mentioned is about knowledge. Knowledge, like a bias, should have attribution — financial, legal. The question you ask: is your knowledge built on your understanding and other things? The problem currently in the debate is that we are throwing our knowledge into some pot, and we don't know where it goes.

It's like — I call it the AI Bermuda Triangle — it disappears, and suddenly we are revisiting it. We are even testing big systems in our lab, in deep-layer testing, where we put in very specific, contextual knowledge, and we realize that it is taken, repackaged, and — not sold back to us yet, but maybe in the future. That's a critical issue. Your knowledge — the knowledge of a local community in Africa, Ubuntu, oral knowledge, written knowledge — belongs to somebody, or should be attributed; shared within the universal framework, definitely, but attributed. That's a critical issue when it comes to knowledge, and also to your previous question of what we should do with knowledge.

And again, the instruments are there, and the risk is that confusing the AI governance discussion with everything and anything — magical realism, a bit — basically misses the core points. It is like a crying baby: instead of answering the question with existing tools, we are giving toys to the baby — discussions on ethics and philosophy, which I love, I love philosophy — but there are some issues related to your question, the question of bias and the question of knowledge, that we can solve with existing instruments.

Kathleen Ziemann: Melinda, before you react as well — I look at Jhalak's face and I see that you might not agree with all of the points mentioned by Jovan, especially, possibly, the one that bias in data can be neglected. Is that something you're thinking about?

Jhalak Kakkar: I mean, I don't disagree with him, actually. There is a reality that there is a level of bias in all of us. It's not that the world is completely unbiased — it's not that when judges make decisions there's no bias there — and ultimately AI is trained on data from this world, and biases will get embedded into it. These systems are trained on existing data sets which capture societal bias. I think the difference is that with human decision-making, in many contexts, we have set up processes and systems, and there has to be disclosure of the thinking and reasoning going into a decision, and that can be vetted if someone raises an objection. With AI systems, that's the challenge: explainability, for various kinds of AI systems in many contexts, has been hard to establish, and that's a question that is still being grappled with. And the other piece is disclosure of the use of AI systems in various contexts — whether someone knows that an AI system is being used and that they are being subjected to it, and then the kind of bias that comes into decision-making that impacts them.

Kathleen Ziemann: Thank you. Melinda?

Melinda Claybaugh: Just two quick thoughts. I think it is critical that AI works for everyone, and part of that is making sure that we do have the data — that there is a way of either training a model on, or fine-tuning a model on, data that is as representative as possible. I think that's a foundational key concept. I also think there needs to be a lot of education around AI outputs, so that when people are interacting with AI, they understand that what they're getting back may not be the truth, right?

What is it, actually? It's just a prediction about the next right word. We're at the very early stages of this in society, and so our expectations of what it is, what it should be, and what these outputs should be relied on for are very much evolving. I do agree that when AI is being used to make decisions about people or their eligibility for services or jobs, there is an extra level of concern and caution and requirements that should be added, in terms of a human in the loop or transparency around how a decision was made. I absolutely understand the concerns around that. As we as a society get more experience and understand these tools more — what they should be used for and what they should not be used for — I think these questions will get more sorted out.

Kathleen Ziemann: Thank you very much. At the IGF we want to be as inclusive as possible; that's why we also have online participation for people who can't be here, and who maybe can't afford to travel here. We have our online moderator, Pedro, behind the mic here. Pedro, if you could give us two relevant questions from the online space that need to be addressed to the panel, that would be really great.

Online moderator: Perfect, thanks. We have a question from Grace Thompson, directed to Jhalak and then Melinda: What are the panelists' views about the consensus gathered in the Council of Europe Framework Convention on AI, human rights, democracy and the rule of law — the first international treaty and legally binding document to safeguard people in the development and oversight of AI systems? We at the Center for AI and Digital Policy advocate for endorsement of this international AI treaty, which has 42 signatories to date, including non-European states.

Kathleen Ziemann: It's not really going through, Pedro — we have difficulties understanding you, I think. Can you give us maybe the two main words that need to be discussed? Was it the EU AI Act in the first one?

Online moderator: The Council of Europe Framework Convention on AI, human rights, democracy and the rule of law — for comments from Jhalak and Melinda on the panel.

Kathleen Ziemann: Okay, thank you very much. I think that went through okayish. Jhalak, do you want to react?

Jhalak Kakkar: Are they asking about the framework? Yes. So I think there has been a lot of conversation globally around what is the right approach to take. Melinda was saying we need to think about which systems need more scrutiny than others, systems that are impacting individuals and people directly versus those that are not.

There’s been a whole conversation, which we’ve referenced earlier in this dialogue, around innovation versus regulation and what is the right level of regulation to come in with at this point. What is too heavy? What is not enough? And I don’t have the answer to that, right. In different contexts it’s going to be different. In countries which have a high regulatory capacity and context, there is more that they can do and implement. In countries that don’t, we have to frame regulation and laws which work for those regulatory and policy contexts.

But what I think is really important is occasions like, for instance, the India AI Impact Summit, which is an opportunity, because India is trying to emerge as a leader in the global majority, to really bring together thinking from civil society, academia, researchers, industry and governments, particularly from the global majority, to talk about what would be the right way forward. Would it be borrowing from ideas that have developed in another context? Perhaps there are ideas that are relevant to pick up from there.

But what is contextually and locally useful and relevant from within the contexts we come from, right? I mean, places like India and South Africa may have a lot of AI that was developed elsewhere, say a Sloan-Kettering health diagnostic tool, which is then brought in and deployed in the Indian context. But demographics are different. The kind of testing and treatments available at our primary, secondary and tertiary health care settings are different.

So there are a lot of differences. How do we think about something like that, which may not really be a topic of discussion in other parts of the world? So I think in India and places like South Africa we may have slightly different challenges to grapple with, and I think it’s very important that those conversations happen as well.

Mlindi Mashologu: Yeah, I think from the South African point of view, as my colleague has just highlighted, one of the areas is human rights, which are enshrined in the Constitution. So whatever you do from the technology point of view, you need to make sure that it doesn’t impact human rights, as well as the Bill of Rights. That’s one of the things we’re trying to do: to make sure that whenever you deploy these types of technologies, they are not infringing on the rights of people.

But you’ll also find that you have some other laws, like the Protection of Personal Information Act, which says that you can’t just use my information when you don’t need it. But then how do we make sure that we can use your information for the public good? So now you have these two laws competing: one is trying to use information for the greater good, but the other is saying that you can’t just use my information.

So I think it’s going to be quite a balancing act: what are the things we can do to drive innovation, and what are the things we need to do to make sure we don’t infringe on human rights, or on people’s information?

Kathleen Ziemann: Yes, thank you very much. I see there are further questions from the floor. Jovan, if you could react briefly, I think that would also be great.

Jovan Kurbalija: There were two concrete questions, on the EU AI Act and the Council of Europe Convention. Just quickly, those are very interesting points. The EU moved fast, and probably too far. As we’re hearing from Brussels, there is now a bit of revisiting of some provisions, especially on defining high-risk models through FLOPs and other things. The Council of Europe is an interesting organisation.

They adopted the Convention on AI, but they’re an interesting organisation because under one roof you have, first, the Convention, but also human rights coverage and the human rights court; you have cybercrime, since the Council of Europe hosts the Budapest Convention; and you have science. Therefore, it’s one of the rare organisations where the interplay between existing silos, when it comes to AI, could basically be bridged within one organisation. Those are just two points on the EU AI Act and the Council of Europe Convention.

Kathleen Ziemann: Thank you very much. Let’s pull last two questions from the floor. I see two people standing behind the mic over there.

Audience: Yes, thank you. My name is Pilar Rodriguez, I’m the youth coordinator for the Internet Governance Forum in Spain. I wanted to follow up a little on what Ms Jhalak was saying about how countries can achieve AI governance and AI sovereignty without this leading to, let’s say, AI fragmentation. I’m not just thinking from a regulatory perspective, because we have the AI Act in Europe, the California AI regulation, China has its own regulation, so doesn’t that lead to more fragmentation? And coming from the youth perspective, is there a way to ensure that we have, let’s say, a global minimum so that future generations can be, let’s say, protected?

Kathleen Ziemann: Thank you very much. Let’s also take the next question of the person behind you.

Audience: Hi, I’m Anna from R3D in Mexico. It’s going to sound like I’m making a comment more than a question, but I promise that for Melinda there’s going to be a question. I was very concerned to hear this underestimation of the risks of AI, making them sound like something hypothetical, when they have actually materialized in several examples around the world. And Jovan was mentioning this topic of knowledge and education while at the same time speaking about alleged biases, when I think that in reality there have been several examples of how classism, racism and misogyny are affecting how people can access basic services around the world, or how police are predicting who is a suspect or not, so we shouldn’t disinform people about the actual risks.

But the question to Melinda relates to the emergency we are living through. Since she mentioned that companies such as Meta are doing these risk assessments, I wonder how Meta is planning to self-regulate when, for example, it hasn’t done environmental or human rights assessments; when it has established hyperscale data centers in places like the Netherlands, where public pressure stopped their construction, so you then move them to global south countries, or to Spain in that case, so that all the issues with extractivism, with water crises, with pollution arrive in other communities where there hasn’t been any consultation, although you claim there has. That would be my question.

Kathleen Ziemann: Thank you very much. So two relevant points. One would be the point of fragmentation and the other one of global AI justice, basically. Melinda, do you want to react first?

Melinda Claybaugh: Sure. I can’t really speak to the data center piece, which I think was your question, around basically the energy needs for AI and where data centers are placed. I can say that I think we all know the AI future is going to require a lot of energy, and that there are a lot of questions about where the energy needs are and where the solutions to those needs are going to come from, but I can’t speak in any detail about how particular decisions are made.

Kathleen Ziemann: And in terms of fragmentation, that was part of the first question, right, the fragmentation of AI governance, having so many initiatives, so many different stakeholders. Which raises the question: how could you, coming from different sectors and regions, cooperate more in that area? Is there an idea here on the panel of what that could look like? Who would like to react to that?

Jovan Kurbalija: The question was how to avoid it?

Kathleen Ziemann: How different sectors and regions could cooperate even better on AI governance? How to counteract against that fragmentation that might also occur from the blooming landscape of AI governance?

Jovan Kurbalija: We have to define what fragmentation is, you know. Having AI adjusted to the Indian, South African, Norwegian, German, Swiss context, whatever, is basically fine. But the communication or exchanges should probably be done through some sort of standardisation around the weights. Weights are basically the key element in AI systems. Therefore we may think about some sort of standards, to avoid the situation we have with social media: if you are on one platform, you cannot migrate with your network to another. The Digital Services Act in the EU is trying to mitigate that. The same thing may apply to AI. If my knowledge is codified by one company and I want to move to another platform or company, there are no tools to do that. My advice would be to be very specific and to focus on standards for the weights, and then to see how we can share the weights and, in that context, how we can share the knowledge.

Kathleen Ziemann: So joint standardisation, Mlindi?

Mlindi Mashologu: I think on the continent we started this governance as early as 2020-2021, when we developed the AI blueprint for the continent, and from there the African Union went on to develop the AI strategy, while individual member countries are also developing their own policies and strategies. So I think there is not much fragmentation; it’s just that at the grassroots level each country will have particular priorities it would like to focus on. Generally, if you look at all the published policies, strategies and legislation, you’ll find that AI governance addresses the core principles: the issues of ethics, of bias, of risk. From the South African point of view, in the policy we are currently finalising, we are advancing some of those aspects as well. So at the country level you’re not going to find exactly the same policies, but that is because there are different priorities as well.

Kathleen Ziemann: Thank you.

Guilherme Canela: Do you want to add anything?

Jhalak Kakkar: Yeah. I think there is a concern that in the drive for innovation there’s a race to the bottom in terms of adherence to responsible AI, ethical AI and rights frameworks. We have several existing documents, ranging from the UDHR to the ICCPR, which can be interpreted and through which, via international organisations, norm building can happen that sets a certain baseline. As the WSIS+20 review happens, I think the IGF should be strengthened to help not only with agenda setting for the action lines, but also as a feedback loop into the CSTD, the WSIS Forum and other mechanisms, so that there is holistic input from multiple stakeholders going into these processes, which accounts for many of the concerns that have been raised, ranging from environmental concerns to the impact of extraction in global majority contexts.

It could be questions of labor for AI, whether it’s labeling or other worker-related concerns. So I think all of this needs to be surfaced, and these conversations need to feed back into the agenda setting as well as the final outcomes that we have. That level of international coordination, both at the multilateral level and at the multi-stakeholder level, is important. And we all have to come together and work to find ways to set this common baseline, so that in the race for getting ahead we don’t lose focus on the common values we have articulated in documents like the UDHR.

Guilherme Canela: Thank you. So now we are walking towards the end, so if the online moderator has a very straightforward question.

Online moderator: Yes, we have one from Michael Nelson, about two sectors. The two sectors that are spending the most money on AI are finance and the military, and we know very little about their successes and failures. He would like to hear from the panelists, especially Jovan and Melinda: what are their fears and hopes about those two sectors?

Jovan Kurbalija: The question is about AI?

Online moderator: Especially finance sector and the military.

Guilherme Canela: Okay, so fears and hopes, finance, military. But then I will give the floor back to all of you: one minute, if you want to comment on that, and within this one minute, what is your key takeaway from this session? Let’s start with you, Melinda.

Melinda Claybaugh: Okay, I don’t have an answer on the finance and military sector hopes and fears, to be honest. We are very focused on adding AI personalisation to our family of apps and products; I’ll leave finance and the military to others. On the key takeaway from the session: I think it really is interesting to take stock of where we are at these meetings. I’ve been at the last couple of IGFs, and the pace of discussion and the developments in this space are really fast-moving. So I’m encouraged, and I would encourage us all to keep having these conversations. Multi-stakeholder will be the word that everyone here says, but the IGF really does play a unique and important role in bringing people together. We have a lot of Meta colleagues here; we take everything we hear back home, talk to people, and let it inform our own direction. So let’s keep having these conversations. I think this convening power, bringing these particular voices together, is the most important contribution in the space right now.

Kathleen Ziemann: Thank you. Jovan?

Jovan Kurbalija: On military and AI, it is unfortunately taking centre stage with the conflicts, especially Ukraine and Gaza, together with the question of the use of drones. There are discussions in the UN on LAWS, lethal autonomous weapons systems, or killer robots, and the Secretary-General has been very vocal for the last five years about banning killer robots, which is basically about AI. What is my take from it? Awareness building, education. At Diplo, we run an AI apprenticeship programme, which explains AI by developing AI: people learn about AI by developing their own AI agents. And I would say let’s demystify AI, but still enjoy its magic.

Kathleen Ziemann: Thank you. Jhalak?

Jhalak Kakkar: Yeah, my final thoughts would be that we need to learn from the past, the successes of the past: things like the multi-stakeholder model, the successes we’ve seen in international cooperation. But we also need to learn from the past in terms of mistakes that have been made around governance and technology, and not repeat those. And I think we need to continue to work together to build a robust, wholesome, impactful and beneficial digital ecosystem.

Kathleen Ziemann: Thank you. Mlindi?

Mlindi Mashologu: From my side, I just want to say that AI needs to be anchored in human rights. We need to make sure that the technology empowers individuals. But when it comes to innovation, we need to do that responsibly, by looking at adaptive governance models, which include things like regulatory sandboxes. The last point I want to touch on is collaboration: aligning national, regional and global efforts to ensure that the benefits of AI are spread across everybody in our society. Those are my final thoughts.

Guilherme Canela: Thank you very much. So now I have the very difficult task of trying to summarize, which would be impossible. But just the disclaimer: whatever I’m going to say now is the full responsibility of Guilherme Canela, not of any of you, right? I think there is an interesting element in this conversation. Many years ago, when I was involved in similar debates on AI governance, the first thing that appeared was bias. Here, bias appeared very late in our panel, which is a good sign, because the first things that appeared were the processes.

Even if we disagree, right? The dichotomy, eventually a false dichotomy, between innovation and risks; but all those keywords we spoke about, risks, innovation, public goods, data governance, bringing knowledge back, those are actually more structured frameworks that look into the real, but very specific, issues of bias, disinformation, conspiracy theories and so on. So I think this is a good sign for all of us, even if we disagree, as you noticed: we are looking into something that we can take to the next level of conversation from a governance point of view.

Because when we are too concentrated on the specific pieces of content rather than the processes, the conversation becomes very difficult, because it is tied to polarization and to specific opinions, which everyone has the right to have, on what is false and what is not, what is dangerous and what is not. Whereas when we concentrate on transparency and accountability, on public goods, and so on, all those keywords come with lots of interesting knowledge behind them on how we transform them into concrete governance, which doesn’t mean only governmental governance; it can be self-regulation, co-regulation, and so on. But for obvious reasons of time, we also left important things out of the conversation that need to be part of governance frameworks. For example, the energy consumption of these machines should be part of governance frameworks, and it appeared only very late today.

But I do think the panel did a good job in also airing some of the divergences of this conversation, which is part of the game. The last thing I want to say, and this is not on the shoulders of the panelists or my co-moderator: I invite you to think that being innovative is to leave no one behind in this conversation. When Eleanor Roosevelt was holding the Universal Declaration of Human Rights in that famous photo, that was the real innovation: how we came together and put those 30 articles together in a groundbreaking way that is not settled even today. So what we really require is an innovation that includes everyone, and not only the 1%. Thank you very much. Thank you, my co-moderator. It was a pleasure.


Kathleen Ziemann

Speech speed

142 words per minute

Speech length

1666 words

Speech time

701 seconds

AI governance is blooming but fragmented with different levels of engagement across sectors and regions

Explanation

Ziemann describes the current AI governance landscape as having numerous emerging tools, principles, processes and bodies globally, but notes this creates a fragmented environment with varying levels of participation across different sectors and regions. She emphasizes that while there are many initiatives, there are different possibilities for engagement.

Evidence

Examples include OECD AI principles (2019), UNESCO recommendations (2022), voluntary commitments by AI companies, EU AI Act, G7 Hiroshima AI process, G20 declarations, Africa AI declaration in Kigali, and UN Global Digital Compact

Major discussion point

Current State of AI Governance Landscape

Topics

Legal and regulatory | Development


Melinda Claybaugh

Speech speed

151 words per minute

Speech length

1982 words

Speech time

783 seconds

We are at an inflection point with many frameworks but questions remain about implementation and effectiveness

Explanation

Claybaugh argues that while there’s no lack of governance frameworks and principles in the AI space, there are still many questions and concerns about their practical implementation. She suggests we need to take stock of what has been established and consider broadening the conversation beyond just risk to include opportunity and innovation.

Evidence

Meta has put out a frontier AI framework for assessing catastrophic risks, but there’s still disagreement on what risks are and how to quantify them. EU AI Act is facing implementation problems with policymakers reconsidering certain aspects

Major discussion point

Current State of AI Governance Landscape

Topics

Legal and regulatory | Economic

False dichotomy between innovation and risk management – both must go hand in hand

Explanation

Claybaugh argues that the conversation has been overweighted toward risk and safety concerns, and suggests we need to talk about enabling technology and innovation alongside risk management. She emphasizes the need to broaden the conversation to include the right voices and representation from different stakeholders.

Evidence

Different regions and countries focus more on innovation and opportunity while others focus on safety and risks. There’s lack of technical and scientific agreement about risks and measurement

Major discussion point

Innovation vs. Risk Management Balance

Topics

Economic | Legal and regulatory

Agreed with

Agreed on

False dichotomy between innovation and risk management – both must go hand in hand

Disagreed with

Disagreed on

Innovation vs. Risk Management Balance – False Dichotomy Debate

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Explanation

Claybaugh contends that there are already legal frameworks in place that pre-date ChatGPT covering issues like copyright, data use, misinformation, and safety. She suggests focusing on whether these existing frameworks are fit for purpose with new technology rather than creating new regulation specifically for the technology itself.

Evidence

Laws around harms people discuss regarding copyright, data use, misinformation and safety already exist and pre-date ChatGPT

Major discussion point

Regulatory Approaches and Implementation

Topics

Legal and regulatory | Human rights

Disagreed with

Disagreed on

Existing Legal Frameworks vs. New AI-Specific Regulation

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Explanation

Claybaugh supports the idea of objective risk assessment processes that can be viewed by external parties, similar to GDPR’s accountability structure. She sees this as a proven and durable governance mechanism that makes sense for AI, though notes challenges around defining and assessing risks.

Evidence

GDPR accountability structure and similar approaches in content regulation space in Europe with risk assessments, mitigations and transparency measures

Major discussion point

Regulatory Approaches and Implementation

Topics

Legal and regulatory | Human rights

Agreed with

Agreed on

Importance of transparency and explainability in AI systems

Different governance approaches needed for open source vs. closed AI models

Explanation

Claybaugh explains that governance challenges differ between open and closed models, particularly regarding responsibility and oversight. With open source models, companies can test and assess risks before release, but cannot predict or control how others will use the models after release.

Evidence

Meta provides open source models where they do testing and risk assessment before release, but people build on them with their own data for applications that Meta cannot see or predict

Major discussion point

Regulatory Approaches and Implementation

Topics

Legal and regulatory | Economic

AI must work for everyone requiring representative training data and fine-tuning

Explanation

Claybaugh emphasizes that for AI to be effective for all users, it’s critical to train models on data that is as representative as possible or fine-tune models appropriately. She also stresses the need for education about AI outputs so people understand the limitations and nature of AI responses.

Evidence

AI outputs are predictions about the next right word, not necessarily truth, and society is in early stages of understanding what AI should be relied on for

Major discussion point

Data Bias and Inclusivity

Topics

Development | Human rights

Convening power of IGF is crucial for bringing diverse voices together in AI governance discussions

Explanation

Claybaugh highlights the unique and important role that IGF plays in bringing different stakeholders together for AI governance conversations. She notes that Meta takes insights from these discussions back to inform their own direction and emphasizes the value of continued multi-stakeholder dialogue.

Evidence

Meta has colleagues attending IGF sessions and they take learnings back home to inform company direction

Major discussion point

Multi-stakeholder Participation and Process

Topics

Legal and regulatory | Development

Agreed with

Agreed on

Need for meaningful multi-stakeholder participation in AI governance


Mlindi Mashologu

Speech speed

171 words per minute

Speech length

2193 words

Speech time

766 seconds

Need for sector-specific policy interventions that are technically informed and locally relevant

Explanation

Mashologu argues that AI governance cannot be one-size-fits-all and requires different approaches for different sectors. He emphasizes that regulating AI in financial services would be different from regulating it in agriculture, requiring sector-specific interventions that are both technically informed and locally relevant.

Evidence

AI in financial services requires different regulation than AI in agriculture

Major discussion point

Current State of AI Governance Landscape

Topics

Legal and regulatory | Development

Agreed with

Agreed on

AI governance must be contextually relevant and locally adapted

Importance of bringing voices from the global south and underrepresented communities to governance dialogues

Explanation

Mashologu emphasizes South Africa’s G20 presidency focus on expanding participation in AI governance discussions by bringing more voices from the African continent, global south, and underrepresented communities to the center of AI governance dialogue. He argues this is essential for multilateral, multi-stakeholder, and multi-sectoral approaches.

Evidence

South Africa’s G20 presidency is working on expanding scope of participation and developing a toolkit to reduce inequalities connected to AI use

Major discussion point

Multi-stakeholder Participation and Process

Topics

Development | Legal and regulatory

Agreed with

Agreed on

Need for meaningful multi-stakeholder participation in AI governance

Should focus on adaptive governance models including regulatory sandboxes for responsible innovation

Explanation

Mashologu advocates for context-aware regulatory innovation that includes regulatory sandboxes, human-in-the-loop mechanisms, and adaptive policy tools that can be calibrated to context-specific risks and benefits. He emphasizes the need for responsible innovation while ensuring AI empowers individuals.

Evidence

Regulatory sandboxes, human-in-the-loop mechanisms, and adaptive policy tools that can be calibrated to specific contexts

Major discussion point

Innovation vs. Risk Management Balance

Topics

Legal and regulatory | Economic

Need for sufficient explainability in AI decisions that impact human lives and livelihoods

Explanation

Mashologu argues for requirements that AI decisions, especially those impacting human lives and livelihoods, must be sufficiently explainable. He emphasizes the right to understand how AI systems make decisions in critical areas and the need for broad demographic representation in training data.

Evidence

Examples include credit scoring, predictive policing, and healthcare diagnostics where people need to understand how AI decisions are made

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

Importance of transparency and explainability in AI systems

Human-in-the-loop mechanisms essential for high-risk domains with clear intervention thresholds

Explanation

Mashologu advocates for human-in-the-loop learning in AI system development from design through deployment, where humans must guide and when needed override automated systems. This includes reinforcement learning with human feedback and clear thresholds for interventions in high-risk domains.

Evidence

Reinforcement learning with human feedback and clear thresholds for interventions in high-risk domains

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Legal and regulatory

AI governance must be anchored in human rights and ensure technology empowers individuals

Explanation

Mashologu emphasizes that AI governance must be grounded in human rights principles as enshrined in South Africa’s Constitution and Bill of Rights. He stresses that whatever technology is implemented should not infringe on people’s rights while still enabling innovation and public good applications.

Evidence

South African Constitution and Bill of Rights, Protection of Personal Information Act creates tension between using information for public good and protecting individual information

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Legal and regulatory

AI governance should be grounded in data justice principles with focus on economic equity and environmental sustainability

Explanation

Mashologu argues that South Africa’s regional approach to AI governance is grounded in data justice principles that put human rights, economic equity, and environmental sustainability at the center of AI development. He recognizes the impact of climate change on human rights and the need to address historical inequities.

Evidence

Recognition of climate change impacts on human rights and environmental sustainability, addressing historical inequities

Major discussion point

Environmental and Social Justice

Topics

Human rights | Development | Sustainable development

Regional frameworks like African Union AI strategy should align with global governance efforts

Explanation

Mashologu explains that South Africa’s AI governance approach leverages existing regional frameworks including the African Union data policy framework and emerging AI strategy, NEPAD’s science and technology frameworks, and regional policy harmonization through SADC. He sees regional integration as foundational to global governance agendas.

Evidence

African Union data policy framework, NEPAD science and technology innovation frameworks, SADC regional policy harmonization

Major discussion point

Global Cooperation and Standardization

Topics

Development | Legal and regulatory

G20 presidency focuses on developing toolkit to reduce AI-related inequalities from global south perspective

Explanation

Mashologu describes South Africa’s G20 presidency championing the development of a toolkit to reduce inequalities connected to AI use, particularly from a global south perspective. The toolkit seeks to identify structural and systematic ways AI can both amplify and redress inequality.

Evidence

G20 agenda includes developing toolkit to identify structural and systematic ways AI can amplify and redress inequality, especially from global south perspective

Major discussion point

Global Cooperation and Standardization

Topics

Development | Economic

J

Jhalak Kakkar

Speech speed

168 words per minute

Speech length

2889 words

Speech time

1031 seconds

Need for meaningful multi-stakeholder input in AI governance creation, not just participation as a matter of form

Explanation

Kakkar emphasizes the importance of multi-stakeholder input in creating AI governance mechanisms, but stresses that participation must be meaningful and actually impact outcomes and outputs, not just be done as a formality. She argues that different stakeholders bring different perspectives that lead to better governance outcomes.

Evidence

Different stakeholders sitting at different parts of the ecosystem bring forth different perspectives

Major discussion point

Multi-stakeholder Participation and Process

Topics

Legal and regulatory | Development

Agreed with

Agreed on

Need for meaningful multi-stakeholder participation in AI governance

False dichotomy between innovation and risk management – both must go hand in hand

Explanation

Kakkar argues against creating a false sense of dichotomy between focusing on risks versus innovation, contending that both must go hand in hand. She warns against the mistakes of not developing governance mechanisms from the beginning and emphasizes that regulation and governance are not bad words.

Evidence

Past mistakes of letting technology develop without governance mechanisms from the beginning, leading to band-aid solutions later

Major discussion point

Innovation vs. Risk Management Balance

Topics

Legal and regulatory | Economic

Agreed with

Agreed on

False dichotomy between innovation and risk management – both must go hand in hand

Disagreed with

Disagreed on

Innovation vs. Risk Management Balance – False Dichotomy Debate

Need for AI impact assessments and audits to understand societal impacts from the beginning

Explanation

Kakkar advocates for implementing AI impact assessments from a socio-technical perspective to understand impacts on society and individuals. She suggests mechanisms like sandboxes and audits can be implemented in light-touch ways to avoid creating path dependencies that require band-aid solutions later.

Evidence

Need to understand harms and impacts rather than going in circles about not knowing what risks are

Major discussion point

Regulatory Approaches and Implementation

Topics

Legal and regulatory | Human rights

Disagreed with

Disagreed on

Existing Legal Frameworks vs. New AI-Specific Regulation

Multi-stakeholder model should be strengthened through IGF and other mechanisms for holistic input

Explanation

Kakkar argues that the IGF should be strengthened as part of the WSIS plus 20 review to help with agenda setting and serve as a feedback loop into CSTD, WSIS forum, and other mechanisms. She emphasizes the need for holistic multi-stakeholder input that addresses various concerns from environmental to labor issues.

Evidence

Need to address environmental concerns, impact of extraction in global majority contexts, labor for AI including labeling and worker-related concerns

Major discussion point

Multi-stakeholder Participation and Process

Topics

Development | Legal and regulatory

International coordination needed to set common baseline while respecting local contexts

Explanation

Kakkar emphasizes the need for international coordination both at multilateral and multi-stakeholder levels to establish a common baseline based on shared values like those in the Universal Declaration of Human Rights. She warns against a race to the bottom in responsible AI adherence while competing for innovation leadership.

Evidence

Existing documents ranging from UDHR to ICCPR can be interpreted through international organizations for norm building

Major discussion point

Global Cooperation and Standardization

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

AI governance must be contextually relevant and locally adapted

Transparency and explainability crucial when bias affects decision-making systems

Explanation

Kakkar acknowledges that bias exists in all human decision-making but argues that AI systems present unique challenges because explainability has been difficult to establish in many AI contexts. She emphasizes the importance of disclosure when AI systems are used and people are subject to biased decision-making that impacts them.

Evidence

Human decision-making has processes and systems with disclosure of thinking and reasoning that can be challenged, but AI systems lack this explainability

Major discussion point

Data Bias and Inclusivity

Topics

Human rights | Legal and regulatory

Agreed with

Agreed on

Importance of transparency and explainability in AI systems

Disagreed with

Disagreed on

Approach to AI Bias Management

J

Jovan Kurbalija

Speech speed

147 words per minute

Speech length

2050 words

Speech time

836 seconds

AI has become a commodity with 434 large language models in China alone, shifting the risk landscape

Explanation

Kurbalija argues that AI has transformed from magical technology to affordable commodity in just three years since ChatGPT’s release. He notes that AI development has become accessible, with the ability to develop AI agents in under five minutes, fundamentally shifting discussions about risks and governance from exclusive lab research to widespread accessibility.

Evidence

434 large language models in China as of the session date, and the ability to develop an AI agent in 4 minutes 34 seconds compared to the years of research previously required

Major discussion point

Current State of AI Governance Landscape

Topics

Economic | Legal and regulatory

AI is about knowledge, not just data – need to shift governance language back to knowledge

Explanation

Kurbalija argues that AI governance discussions have shifted away from knowledge to focus primarily on data, but AI is fundamentally about knowledge creation and preservation. He points out that WSIS documents originally emphasized knowledge, but this has been cleaned out of recent documents like the Global Digital Compact in favor of data-centric language.

Evidence

WSIS documents from Geneva and Tunis emphasized knowledge as key term, but 20 years later knowledge is absent from GDC and current WSIS documents which only mention data

Major discussion point

Knowledge vs. Data Framework

Topics

Legal and regulatory | Development

Disagreed with

Disagreed on

Knowledge vs. Data Framework Priority

Knowledge should have attribution and belong to communities rather than disappearing into AI systems

Explanation

Kurbalija argues that knowledge, including local community knowledge like Ubuntu and oral traditions, should be attributed and shared rather than disappearing into what he calls an ‘AI Bermuda Triangle.’ He emphasizes that knowledge belongs to someone and should be attributed even when shared through universal frameworks.

Evidence

Testing in their lab shows specific contextual knowledge being taken, repackaged, and potentially sold back. Examples include Ubuntu, oral knowledge, and written knowledge from local communities in Africa

Major discussion point

Knowledge vs. Data Framework

Topics

Human rights | Development

Disagreed with

Disagreed on

Knowledge vs. Data Framework Priority

Risk of knowledge centralization and monopolization similar to early internet development

Explanation

Kurbalija warns that there’s a risk of knowledge being centralized and monopolized in AI systems, similar to what happened in the early days of the Internet where the promise that anyone could develop digital solutions ended up with only a few being able to do it. He suggests this historical wisdom should inform AI governance solutions.

Evidence

Early Internet experience where initial promise of universal access to development ended with concentration among few players

Major discussion point

Knowledge vs. Data Framework

Topics

Economic | Development

AI responsibility should follow the eternal legal principle since Hammurabi's code that developers are responsible for their products and activities

Explanation

Kurbalija argues that AI governance should return to common-sense principles that have existed since ancient times, citing Hammurabi’s law about builders being responsible for house collapses. He contends that whoever develops and deploys AI systems should be responsible for their outcomes, and that AI governance should be explainable to a five-year-old.

Evidence

Hammurabi's law from 3,400 years ago about builder responsibility, the Napoleonic code, and a hypothetical example of Jovan's responsibility for Diplo's AI system making false reports about session participants

Major discussion point

Innovation vs. Risk Management Balance

Topics

Legal and regulatory | Human rights

Need for joint standardization, particularly around AI weights sharing to avoid platform lock-in

Explanation

Kurbalija suggests that to avoid fragmentation while allowing local AI adaptation, there should be standardization around AI weights sharing. He warns against repeating social media platform problems where users cannot migrate their networks between platforms, advocating for standards that allow knowledge portability between AI systems.

Evidence

Current social media platform lock-in, where users cannot migrate networks, EU Digital Services Act trying to address this issue

Major discussion point

Global Cooperation and Standardization

Topics

Legal and regulatory | Economic

Agreed with

Agreed on

AI governance must be contextually relevant and locally adapted

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Explanation

Kurbalija argues that while illegal biases that insult human dignity must be addressed with urgency, there has been a dangerous obsession with cleaning all biases from AI systems. He contends that humans are naturally 'biased machines' influenced by culture, age, and many other aspects of identity, and that cleaning all biases from AI systems would be at best impossible and at worst dangerous.

Evidence

Personal example of each of us being shaped by our culture, age, and other individual characteristics

Major discussion point

Data Bias and Inclusivity

Topics

Human rights | Legal and regulatory

Disagreed with

Disagreed on

Approach to AI Bias Management

G

Guilherme Canela

Speech speed

145 words per minute

Speech length

1156 words

Speech time

477 seconds

Innovation should mean leaving no one behind in the conversation

Explanation

Canela argues that true innovation in AI governance should be inclusive and ensure that everyone is part of the conversation, not just the 1%. He draws a parallel to Eleanor Roosevelt and the Universal Declaration of Human Rights as an example of groundbreaking innovation that brought people together in an inclusive way.

Evidence

Eleanor Roosevelt holding the Universal Declaration of Human Rights as example of real innovation that was groundbreaking and inclusive with 33 articles

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Development

A

Audience

Speech speed

140 words per minute

Speech length

1152 words

Speech time

493 seconds

Data gathering inherently contains bias from experts collecting it, need inclusive approaches

Explanation

An audience member from Nigeria argues that data gathering for AI is inherently flawed because it’s done by experts who each have their own biases, making the resulting AI systems biased from the start. They emphasize the need for inclusive approaches that bring all stakeholders into the data gathering process to ensure AI serves all communities.

Evidence

Every individual person has their own bias, so whatever data is gathered is as inherently flawed as the bias of the person gathering it

Major discussion point

Data Bias and Inclusivity

Topics

Development | Human rights

Social offset mechanisms could help organizations demonstrate responsibility for AI risks

Explanation

An audience member suggests that organizations using AI could document existing and foreseeable risks and demonstrate how they offset those risks in an objective way with independent oversight. They propose this as a creative way to make AI governance tangible and explainable, drawing parallels to carbon offset mechanisms.

Evidence

B Corp standard for environmental social governance (ESG), carbon offset mechanisms, hypothetical example of social media platforms demonstrating social offset 15 years ago

Major discussion point

Environmental and Social Justice

Topics

Legal and regulatory | Human rights

Need to address environmental impacts and extractivism related to AI infrastructure development

Explanation

An audience member from Mexico challenges the underestimation of AI risks and specifically questions how companies like Meta plan to self-regulate when they haven’t conducted environmental or human rights assessments for hyperscale data centers. They point to examples of data centers being moved from places like the Netherlands to Global South countries without proper consultation.

Evidence

Hyperscale data centers in Netherlands facing public pressure, then moved to Global South countries or Spain, issues with extractivism, hydric crisis, and pollution affecting communities without consultation

Major discussion point

Environmental and Social Justice

Topics

Development | Human rights | Sustainable development

O

Online moderator

Speech speed

145 words per minute

Speech length

178 words

Speech time

73 seconds

Questions about the Council of Europe framework convention on AI as the first international legally binding treaty

Explanation

The online moderator relayed a question from Grace Thompson about panelists’ views on the Council of Europe framework convention on AI, human rights, democracy and the rule of law. This represents the first international treaty and legally binding document to safeguard people in AI system development and oversight.

Evidence

42 signatories to date including non-European states, advocacy from Center for AI and Digital Policy for endorsement

Major discussion point

Global Cooperation and Standardization

Topics

Legal and regulatory | Human rights

Finance and military sectors are biggest AI spenders but lack transparency about successes and failures

Explanation

The online moderator conveyed Michael Nelson’s observation that finance and military sectors spend the most money on AI development but there is very little public knowledge about their successes and failures. This raises concerns about transparency and accountability in these critical sectors.

Evidence

Finance sector and military are the two sectors spending the most money on AI

Major discussion point

Current State of AI Governance Landscape

Topics

Economic | Legal and regulatory

Agreements

Agreement points

False dichotomy between innovation and risk management – both must go hand in hand

False dichotomy between innovation and risk management – both must go hand in hand

False dichotomy between innovation and risk management – both must go hand in hand

Both speakers agree that creating a division between focusing on innovation versus managing risks is counterproductive. They argue that both aspects must be addressed simultaneously rather than treating them as opposing priorities.

Legal and regulatory | Economic

Need for meaningful multi-stakeholder participation in AI governance

Convening power of IGF is crucial for bringing diverse voices together in AI governance discussions

Need for meaningful multi-stakeholder input in AI governance creation, not just participation as a matter of form

Importance of bringing voices from the global south and underrepresented communities to governance dialogues

All three speakers emphasize the critical importance of inclusive, meaningful participation from diverse stakeholders in AI governance processes, with particular attention to ensuring voices from the Global South and underrepresented communities are heard.

Legal and regulatory | Development

Importance of transparency and explainability in AI systems

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Transparency and explainability crucial when bias affects decision-making systems

Need for sufficient explainability in AI decisions that impact human lives and livelihoods

Speakers agree that AI systems, particularly those affecting human lives and decision-making, must be transparent and explainable, with objective and auditable processes for risk assessment and accountability.

Legal and regulatory | Human rights

AI governance must be contextually relevant and locally adapted

Need for sector-specific policy interventions that are technically informed and locally relevant

International coordination needed to set common baseline while respecting local contexts

Need for joint standardization, particularly around AI weights sharing to avoid platform lock-in

Speakers agree that while some standardization and coordination is needed, AI governance must be adapted to local contexts, sectors, and specific needs rather than applying one-size-fits-all solutions.

Legal and regulatory | Development

Similar viewpoints

Both speakers advocate for proactive governance mechanisms that can assess and address AI impacts from the early stages of development, using adaptive approaches like regulatory sandboxes to enable responsible innovation while managing risks.

Need for AI impact assessments and audits to understand societal impacts from the beginning

Should focus on adaptive governance models including regulatory sandboxes for responsible innovation

Legal and regulatory | Economic

Both speakers draw lessons from internet governance history, warning against concentration of power and emphasizing the need for distributed, multi-stakeholder approaches to prevent repeating past mistakes of centralization.

Risk of knowledge centralization and monopolization similar to early internet development

Multi-stakeholder model should be strengthened through IGF and other mechanisms for holistic input

Economic | Development

Both speakers emphasize that human rights principles should be the foundation of AI governance, with international coordination needed to establish common baselines while allowing for local adaptation and ensuring technology serves to empower rather than harm individuals.

AI governance must be anchored in human rights and ensure technology empowers individuals

International coordination needed to set common baseline while respecting local contexts

Human rights | Legal and regulatory

Unexpected consensus

Existing legal frameworks may be sufficient with adaptation rather than new AI-specific regulation

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Transparency and explainability crucial when bias affects decision-making systems

Despite representing different sectors (private sector vs. civil society), both speakers acknowledge that many existing legal frameworks may be applicable to AI governance challenges, though they may need adaptation. This consensus is unexpected given typical tensions between industry and civil society on regulatory approaches.

Legal and regulatory | Human rights

Acknowledgment of natural human bias while focusing on harmful biases

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Transparency and explainability crucial when bias affects decision-making systems

Both speakers, despite different backgrounds, agree that not all bias is problematic and that efforts should focus on addressing harmful or illegal biases rather than attempting to eliminate all bias. This nuanced view is unexpected in AI governance discussions that often call for complete bias elimination.

Human rights | Legal and regulatory

Common sense and historical legal principles should guide AI governance

Common sense principles from historical legal frameworks like Hammurabi’s code should guide AI responsibility

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Unexpectedly, both the academic/diplomatic representative and the private sector representative agree that AI governance should build on established legal principles and common sense approaches rather than creating entirely new frameworks. This suggests convergence on evolutionary rather than revolutionary regulatory approaches.

Legal and regulatory | Human rights

Overall assessment

Summary

The speakers demonstrated significant consensus on key principles including the need for multi-stakeholder participation, transparency and explainability, contextual adaptation of governance frameworks, and the integration of innovation with risk management. There was also agreement on building upon existing legal frameworks rather than creating entirely new regulatory structures.

Consensus level

High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggests a mature discussion where stakeholders from different sectors (private, public, civil society, and academic) have moved beyond basic positions to focus on practical governance solutions. The implications are positive for AI governance development, as this level of agreement on core principles provides a strong foundation for collaborative policy development while allowing for contextual adaptation and sector-specific approaches.

Differences

Different viewpoints

Innovation vs. Risk Management Balance – False Dichotomy Debate

False dichotomy between innovation and risk management – both must go hand in hand

False dichotomy between innovation and risk management – both must go hand in hand

While both speakers agree it’s a false dichotomy, Claybaugh argues the conversation has been overweighted toward risk and safety concerns and suggests broadening to include opportunity and innovation. Kakkar counters that regulation and governance are not bad words and warns against repeating past mistakes of not developing governance mechanisms from the beginning.

Legal and regulatory | Economic

Existing Legal Frameworks vs. New AI-Specific Regulation

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Need for AI impact assessments and audits to understand societal impacts from the beginning

Claybaugh advocates for using existing legal frameworks that pre-date ChatGPT and assessing their fitness for purpose rather than creating new AI-specific regulation. Kakkar argues for implementing new AI impact assessments and audits from the beginning to understand societal impacts that may not be covered by existing frameworks.

Legal and regulatory | Human rights

Approach to AI Bias Management

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Transparency and explainability crucial when bias affects decision-making systems

Kurbalija argues against the ‘obsession’ with cleaning all biases from AI systems, distinguishing between illegal biases that should be addressed and natural human biases that are inevitable. Kakkar emphasizes the unique challenges AI systems present regarding explainability and the need for transparency when biased decision-making impacts people.

Human rights | Legal and regulatory

Knowledge vs. Data Framework Priority

AI is about knowledge, not just data – need to shift governance language back to knowledge

Knowledge should have attribution and belong to communities rather than disappearing into AI systems

Kurbalija uniquely emphasizes shifting AI governance discussions from data-centric to knowledge-centric language, arguing that knowledge should have attribution and belong to communities. Other panelists focus more on data governance, bias, and regulatory frameworks without specifically addressing this knowledge vs. data distinction.

Legal and regulatory | Development

Unexpected differences

Self-regulation vs. External Oversight Effectiveness

Risk assessment processes should be objective, transparent, and auditable, similar to GDPR accountability structures

Need for AI impact assessments and audits to understand societal impacts from the beginning

Need to address environmental impacts and extractivism related to AI infrastructure development

Unexpected tension emerged when Claybaugh discussed Meta’s self-regulatory efforts while the audience member from Mexico directly challenged Meta’s track record on environmental and human rights assessments for data centers. This created an unexpected confrontation about corporate accountability that Claybaugh couldn’t fully address.

Development | Human rights | Sustainable development

Urgency of AI Governance Implementation

We are at an inflection point with many frameworks but questions remain about implementation and effectiveness

False dichotomy between innovation and risk management – both must go hand in hand

While both speakers acknowledged the current state of AI governance, an unexpected disagreement emerged about timing and urgency. Claybaugh suggested taking stock and potentially slowing down (referencing the EU's reconsideration), while Kakkar emphasized the urgency of implementing governance mechanisms immediately to avoid path dependencies.

Legal and regulatory | Economic

Overall assessment

Summary

The main areas of disagreement centered around the balance between innovation and regulation, the adequacy of existing legal frameworks versus need for new AI-specific governance, approaches to bias management, and the priority of knowledge versus data frameworks in AI governance discussions.

Disagreement level

Moderate disagreement with significant implications. While speakers shared common goals of inclusive, responsible AI governance, their different approaches could lead to fragmented implementation strategies. The disagreements reflect broader tensions in the AI governance community between industry self-regulation and external oversight, between leveraging existing frameworks and creating new ones, and between global standardization and local adaptation. These disagreements are constructive and represent legitimate different perspectives rather than fundamental conflicts, but they highlight the complexity of achieving coordinated AI governance across different stakeholders and regions.


Takeaways

Key takeaways

AI governance is at a critical inflection point with numerous frameworks established but implementation challenges remaining

Multi-stakeholder participation must be meaningful and inclusive, particularly bringing voices from the global south and underrepresented communities

The innovation vs. risk management debate represents a false dichotomy – both elements must be addressed simultaneously through adaptive governance models

AI governance should shift focus from data to knowledge, with proper attribution and community ownership of knowledge being essential

Human rights must anchor all AI governance efforts, with explainability and human-in-the-loop mechanisms required for high-risk applications

Existing legal frameworks can address many AI-related harms but need assessment for fitness-for-purpose in the AI context

Global cooperation requires standardization (particularly around AI weights sharing) while respecting local contexts and priorities

Bias in AI systems is inevitable but must be distinguished between natural human bias and illegal/harmful bias, with transparency being key

Environmental and social justice considerations, including extractivism and energy consumption, must be integrated into AI governance frameworks

The IGF’s convening power is crucial for bringing diverse stakeholders together to advance AI governance discussions

Resolutions and action items

South Africa’s G20 presidency will develop a toolkit to reduce AI-related inequalities from a global south perspective

Continue strengthening the IGF as a feedback mechanism into CSTD, WSIS forum, and other multilateral processes

Implement AI impact assessments and audits to understand societal impacts from early stages of development

Develop regulatory sandboxes and adaptive policy tools for context-specific AI governance

Focus on joint standardization efforts, particularly around AI weights sharing standards

Align national AI policies with regional frameworks like the African Union AI strategy

Establish human-in-the-loop mechanisms with clear intervention thresholds for high-risk AI domains

Unresolved issues

How to achieve global consensus on what constitutes AI risks and how to measure them scientifically

How to balance innovation incentives with regulatory requirements without creating a race to the bottom

How to ensure meaningful participation from global majority countries in AI governance processes given resource constraints

How to address the environmental impact and energy consumption of AI systems in governance frameworks

How to handle responsibility and liability in open source AI models where usage cannot be predicted or controlled

How to prevent knowledge centralization and monopolization while enabling AI development

How to create universal frameworks while respecting local contexts and priorities

How to address the concentration of AI development in certain regions and democratize access to AI technology

How to implement effective transparency and explainability requirements for complex AI systems

Suggested compromises

Use risk assessment frameworks similar to GDPR that are objective, transparent, and auditable rather than prescriptive technology regulation

Implement light-touch regulatory mechanisms like sandboxes and impact assessments to understand harms without stifling innovation

Focus on sector-specific governance approaches rather than one-size-fits-all AI regulation

Distinguish between different types of AI systems (open source vs. closed, high-risk vs. low-risk) for differentiated governance approaches

Build on existing legal frameworks and assess their fitness for purpose rather than creating entirely new regulatory structures

Establish common baseline standards through international cooperation while allowing for local adaptation and priorities

Balance self-regulation by companies with external oversight and multi-stakeholder input

Address both macro-level foresight and micro-level precision in AI governance through complementary approaches

Thought provoking comments

We don’t necessarily agree on what the risks are and whether there are risks and how we quantify them… Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with?

Speaker

Melinda Claybaugh

Reason

This comment challenged the dominant risk-focused narrative in AI governance discussions by questioning the assumed trade-off between innovation and safety. It was provocative because it suggested the AI governance community might be overemphasizing risks at the expense of opportunities.

Impact

This comment became a central theme throughout the discussion, with Jhalak directly addressing it as a ‘false dichotomy’ and arguing that innovation and governance must go hand in hand. It shifted the conversation from purely technical governance issues to fundamental questions about how we frame AI development.

We have to change our governance language. If you read WSIS documents, both Tunis and Geneva, the key term was knowledge, not data… Now, somehow, in 20 years’ time, knowledge is completely cleaned. You don’t have it in GDC, you don’t have it in the WSIS documents, you have only data. And AI is about knowledge.

Speaker

Jovan Kurbalija

Reason

This observation was intellectually provocative because it identified a fundamental shift in how we conceptualize information governance – from knowledge (which implies human understanding and context) to data (which is more technical and abstract). It connected current AI debates to broader historical patterns in digital governance.

Impact

This comment introduced a new analytical framework that influenced subsequent discussions about attribution, ownership, and the democratization of AI. It led to deeper conversations about who owns knowledge embedded in AI systems and how to preserve local and contextual knowledge.

Very often, I’m hearing conversations about, you know, we’ve talked about risk. Let’s focus on innovation now. I think it’s creating a false sense of dichotomy. I think they have to go hand in hand… We need to be carrying out AI impact assessments from a socio-technical perspective so that we really understand impacts on society and individuals.

Speaker

Jhalak Kakkar

Reason

This comment directly challenged Melinda’s framing and provided a sophisticated counter-argument that governance and innovation are complementary rather than competing priorities. It introduced the concept of socio-technical impact assessments as a practical solution.

Impact

This response elevated the discussion from a simple either/or debate to a more nuanced conversation about how to implement governance mechanisms that support rather than hinder innovation. It led to practical discussions about sandboxes, audits, and light-touch regulatory approaches.

We advocate for context-aware regulatory innovation… There is no one-size-fits-all when it comes to AI. We need peripheral foundational approaches that are grounded in equity and don’t want AI to replace humans, but we want AI to work with humans.

Speaker

Mlindi Mashologu

Reason

This comment introduced the crucial concept of ‘context-aware regulatory innovation’ and emphasized the Global South perspective on AI governance. It challenged universalist approaches to AI governance while maintaining focus on equity and human-centered development.

Impact

This perspective influenced the entire panel’s discussion about local relevance versus global coordination, leading to deeper conversations about how to avoid AI governance fragmentation while respecting local contexts and priorities.

We should keep in mind that we are bias machines. I am biased. My culture, my age, my hormones, whatever, are defining what I’m saying now… This obsession with cleaning bias was very dangerous. Yes, illegal biases, biases that threaten communities, definitely. But I would say we have to bring more common sense into this.

Speaker

Jovan Kurbalija

Reason

This was a controversial and thought-provoking comment that challenged the prevailing orthodoxy about bias elimination in AI systems. It introduced nuance by distinguishing between harmful biases and natural human perspectives, advocating for a more realistic approach to bias in AI.

Impact

This comment sparked immediate reactions from other panelists and audience members, leading to a more sophisticated discussion about what types of bias are problematic versus natural, and how to handle bias in AI systems without losing valuable diversity of perspectives.

How do we really, truly democratize access to AI? We need to enhance capacity of countries to create local AI ecosystems so that we don’t have a concentration of infrastructure and technology in certain regions… How do we facilitate access to technology and create AI commons?

Speaker

Jhalak Kakkar

Reason

This comment shifted the focus from governance frameworks to fundamental questions of global equity and access. It connected AI governance to broader development and justice issues, introducing concepts like ‘AI commons’ and technology transfer.

Impact

This perspective influenced the discussion toward more structural questions about global AI inequality and led to conversations about how governance frameworks should address not just safety and innovation, but also equitable access and development.

Overall assessment

These key comments fundamentally shaped the discussion by introducing several important tensions and frameworks: the innovation-versus-governance debate, the knowledge-versus-data paradigm shift, the global-versus-local governance challenge, and the bias-elimination-versus-natural-diversity question. Rather than settling these tensions, the comments elevated the conversation to a more sophisticated level where participants grappled with complex trade-offs and nuanced positions. The discussion evolved from initial position statements to a more dynamic exchange where panelists directly engaged with each other’s frameworks, ultimately producing a richer understanding of AI governance challenges that goes beyond simple regulatory approaches to encompass questions of equity, access, knowledge ownership, and cultural context.

Follow-up questions

How can existing legal frameworks be adapted to be fit for purpose with AI technology, particularly in areas like antitrust/competition law, copyright law, and data protection?

Speaker

Jhalak Kakkar

Explanation

This addresses the gap between current regulations and the new realities that AI brings, such as network effects, data advantages, and fair use exceptions being leveraged by large companies in ways not originally intended.

How can we develop mechanisms to share AI model weights and preserve knowledge attribution while enabling interoperability between AI systems?

Speaker

Jovan Kurbalija

Explanation

This is crucial for preventing knowledge monopolization and ensuring that knowledge generated by communities can be preserved and attributed properly, while avoiding the platform lock-in problems seen with social media.

How can we implement AI impact assessments and auditing mechanisms in light-touch ways that don’t burden innovation but help us understand societal impacts?

Speaker

Jhalak Kakkar

Explanation

This addresses the need to understand AI’s impacts on society and individuals before path-dependencies are created, allowing for proactive rather than reactive governance.

How can we ensure meaningful participation from the global majority in AI governance processes, not just token representation?

Speaker

Jhalak Kakkar

Explanation

This is essential because AI will function and impact differently in different contexts, requiring diverse perspectives in governance frameworks rather than one-size-fits-all approaches.

How can we develop context-aware regulatory frameworks that address sector-specific AI applications while maintaining coherent governance principles?

Speaker

Mlindi Mashologu

Explanation

Different sectors (financial services, agriculture, healthcare) require different regulatory approaches, but governance must balance sector specificity with overall consistency.

How can we establish clear responsibility and liability frameworks for AI systems, particularly in cases where AI makes errors or causes harm?

Speaker

Jovan Kurbalija

Explanation

Using the example of Diplo’s AI potentially misreporting statements, this highlights the need for clear accountability mechanisms similar to historical legal principles like those in Hammurabi’s code.

How can we create universal frameworks for AI governance while respecting local contexts and avoiding fragmentation?

Speaker

Kunle Olorundare

Explanation

This addresses the tension between having consistent global standards and accommodating different regional needs and priorities in AI governance.

How can we ensure inclusive data collection processes that account for multiple stakeholder perspectives and reduce inherent biases in AI training data?

Speaker

Kunle Olorundare

Explanation

This is fundamental to creating AI systems that work for everyone, as biased data collection by experts can perpetuate existing inequalities and exclusions.

How can we address the environmental and social justice impacts of AI infrastructure, particularly regarding data center placement and resource extraction?

Speaker

Anna from R3D

Explanation

This highlights the need to consider the broader impacts of AI development, including environmental costs and how they disproportionately affect communities in the Global South.

What are the implications of AI development and deployment in high-stakes sectors like finance and military, and how should these be governed?

Speaker

Michael Nelson (online)

Explanation

These sectors are investing heavily in AI but with little transparency about successes and failures, raising questions about oversight and accountability in critical applications.

How can we establish international coordination mechanisms that set common baselines for AI governance while allowing for innovation?

Speaker

Jhalak Kakkar

Explanation

This addresses the need to prevent a ‘race to the bottom’ in AI governance standards while maintaining space for technological advancement and regional adaptation.

How can we democratize access to AI technology and create AI commons to prevent concentration of AI capabilities in certain regions?

Speaker

Jhalak Kakkar

Explanation

This relates to ensuring equitable access to AI benefits and preventing the same concentration patterns seen in previous technology developments.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.