WS #187 Bridging Internet AI Governance From Theory to Practice


Session at a glance

Summary

This joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality explored how the internet’s foundational principles can guide AI governance as artificial intelligence becomes increasingly central to digital interactions. The discussion centered on two key questions: how internet principles of openness and decentralization can inform transparent AI governance, and how network neutrality concepts like generativity and fair competition can apply to AI infrastructure and content creation.


Vint Cerf emphasized that while the internet and AI are “different beasts,” AI systems should prioritize safety, transparency, and provenance of training data. He highlighted emerging standards like agent-to-agent protocols that could enable interoperability between AI systems. Sandrine Elmi Hersi from France’s ARCEP outlined three areas for applying internet values to AI: accelerating transparency in AI models, preserving distributed intelligence rather than centralized control, and extending non-discrimination principles to AI infrastructure and content curation.


Renata Mielli from Brazil’s CGI noted that while some internet governance principles like freedom and interoperability can transfer to AI, others like net neutrality may not directly apply since AI systems are inherently non-neutral. Hadia Elminiawi discussed Africa’s AI strategy and raised practical questions about implementing transparency requirements, suggesting that requiring open-source safety guardrails might be more feasible than full model transparency.


Several participants emphasized the challenge of market concentration in AI, contrasting it with the internet’s originally decentralized architecture. The discussion revealed tensions between promoting innovation and ensuring accountability, with speakers noting the need for risk-based approaches, liability frameworks, and multi-stakeholder governance. The session concluded with calls for transforming these principles into technical standards and regulatory frameworks while maintaining the collaborative spirit that made internet governance successful.


Key points

## Major Discussion Points:


– **Fundamental architectural differences between Internet and AI**: The discussion emphasized that while the Internet was built on open, decentralized, transparent, and interoperable principles, AI systems (particularly large language models) operate through centralized, opaque, and proprietary architectures controlled by a handful of major companies, creating tension between these two paradigms.


– **Applying Internet governance principles to AI governance**: Speakers explored how core Internet values like openness, transparency, non-discrimination, and net neutrality could be translated into AI governance frameworks, while acknowledging that some principles (like technical neutrality) may not directly apply since AI systems are inherently non-neutral.


– **Market concentration and gatekeeper concerns**: Multiple speakers highlighted the risk of AI systems becoming new gatekeepers that could limit user choice and content diversity, drawing parallels to earlier Internet governance challenges around platform dominance and the need for regulatory oversight to preserve competition and openness.


– **Global South representation and digital equity**: The discussion addressed how AI governance frameworks must include diverse global perspectives, particularly from Africa, Latin America, and Asia, to avoid replicating the digital divides and power imbalances that have characterized Internet development.


– **Practical implementation challenges**: Speakers debated the realistic prospects for international cooperation on AI governance, questioning whether major AI companies and governments have sufficient incentives to participate in multilateral governance frameworks, and emphasizing the need for risk-based approaches, liability frameworks, and technical standards.


## Overall Purpose:


The discussion aimed to bridge Internet governance principles with emerging AI governance challenges, exploring how decades of experience regulating Internet infrastructure and services could inform approaches to governing artificial intelligence systems. The session sought to move beyond theoretical frameworks toward practical implementation strategies for ensuring AI development remains aligned with values of openness, transparency, and user empowerment.


## Overall Tone:


The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers drawing encouraging parallels between Internet and AI governance challenges. However, the tone became more realistic and somewhat pessimistic as participants acknowledged significant obstacles, including corporate resistance to regulation, geopolitical tensions, market concentration, and the fundamental differences between Internet and AI architectures. Despite these challenges, the session concluded on a pragmatic note, with calls for continued collaboration and specific next steps for the working groups involved.


Speakers

**Speakers:**


– **Olivier Crepin-Leblond** – Co-chair of the session, moderator for remote participation


– **Pari Esfandiari** – Co-chair for the Dynamic Coalition on Core Internet Values


– **Luca Belli** – Co-chair for the Dynamic Coalition on Network Neutrality


– **Vint Cerf** – Joining remotely from the US, works with a company that has invested heavily in AI and AI-based services, co-author with Bob Kahn of the TCP/IP internetworking protocols


– **Sandrine Elmi Hersi** – Representative from ARCEP (the French regulatory authority for electronic communications), involved in shaping digital strategies within government


– **Renata Mielli** – Coordinator of CGI.br (Brazilian Internet Steering Committee), leading debates on net neutrality, internet openness and AI issues in Brazil


– **Hadia Elminiawi** – Representative from the African continent, discussing AI governance from African perspective


– **William Drake (Bill Drake)** – Commenter/additional speaker


– **Roxana Radu** – Commenter/additional speaker (participating remotely)


– **Shuyan Wu** – Representative from China Mobile (one of the world’s largest telecom operators), commenter/additional speaker


– **Yik Chan Ching** – Representative from PNAI (Policy Network on Artificial Intelligence), an intersessional process of the IGF


– **Alejandro Pisanty** – Online participant, previously involved in core internet values dynamic coalition discussions


– **Audience** – Various audience members who asked questions (including Dominique Hazaël-Massieux from W3C, and Andrew Campling – internet standards and governance enthusiast)


**Additional speakers:**


– **Dominique Hazaël-Massieux** – Works for W3C (World Wide Web Consortium), oversees work around AI and its impact on the web


– **Andrew Campling** – Internet standards and internet governance enthusiast


Full session report

# Bridging Internet Core Values and AI Governance: A Comprehensive Report


## Executive Summary


This joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality examined how established internet governance principles might inform emerging AI governance frameworks. Moderated by Olivier Crepin-Leblond and co-chaired by Pari Esfandiari and Luca Belli, the discussion brought together international experts to explore the intersection between internet governance and AI systems.


The session revealed both opportunities and challenges in applying internet principles to AI governance. While speakers agreed on the importance of values like transparency and safety, they identified fundamental differences between the internet’s distributed architecture and AI’s centralized model. The discussion produced practical recommendations including risk-based governance approaches, technical standards development, and targeted interventions at AI-internet intersection points.


## Opening Framework and Central Questions


Pari Esfandiari opened by establishing the session’s premise: as generative AI becomes a primary gateway to content, internet core values must guide AI governance. She posed two key questions: how can internet principles of openness and decentralization inform transparent AI governance, and how can network neutrality concepts apply to AI infrastructure and content creation.


Luca Belli immediately introduced a fundamental tension, observing that “the Internet and AI are two different beasts.” He noted that while celebrating 51 years since foundational internet work, the internet was built on open, decentralized, transparent, and interoperable architecture, whereas AI operates through highly centralized architecture controlled by major companies. This architectural difference became a recurring theme throughout the session.


## Expert Perspectives


### Vint Cerf: Technical Standards and Safety


Vint Cerf, joining remotely, emphasized that AI systems should prioritize safety, transparency, and provenance of training data. He highlighted ongoing work on agent-to-agent (A2A) protocols and model context protocols (MCP) to ensure interoperability between AI systems, drawing parallels to internet protocols.
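
The report does not reproduce any technical detail of these protocols. As a rough, non-authoritative sketch of what such interoperability looks like in practice: both A2A and MCP are built on JSON-RPC-style message exchange, along the lines below. The method names and payload shapes here are illustrative assumptions for clarity, not the published specifications.

```typescript
// Illustrative only: a JSON-RPC 2.0-style exchange of the kind agent
// protocols such as MCP and A2A build on. Method names and payload
// shapes are assumptions for illustration, not the actual specs.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// An agent first discovers what capabilities (tools) a peer or context
// server exposes, so both sides share an explicit, machine-readable
// description of the interaction context...
const discover: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// ...then invokes a capability with typed arguments. Well-defined,
// schema-checked semantics are what keep chains of agents from playing
// "telephone" with each other's meaning, in Cerf's analogy.
const invoke: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "search_corpus", // hypothetical tool name
    arguments: { query: "net neutrality rulings since 2015" },
  },
};
```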


Cerf challenged purely centralized views of AI, noting that “every time someone interacts with one of those [large language models], they are specializing it to their interests and their needs.” He advocated for risk-based approaches focusing on user risk and provider liability, with higher safety standards for high-risk applications like medical diagnosis and financial advice.
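
The session did not specify how a risk-based approach would be operationalized. As a purely hypothetical sketch, a provider or regulator might express it as a tiered mapping from application domain to required safeguards; all domains, tiers, and obligations below are invented for illustration.

```typescript
// Hypothetical sketch of a risk-tiered policy table, in the spirit of
// the risk-based approach discussed in the session. Everything here is
// an illustrative assumption, not an existing framework.

type RiskTier = "minimal" | "limited" | "high";

interface Obligations {
  humanOversight: boolean;   // must a human review outputs?
  auditLogging: boolean;     // must interactions be logged for audit?
  providerLiability: string; // who answers for harm?
}

const policy: Record<RiskTier, Obligations> = {
  minimal: { humanOversight: false, auditLogging: false, providerLiability: "standard consumer law" },
  limited: { humanOversight: false, auditLogging: true,  providerLiability: "standard consumer law" },
  high:    { humanOversight: true,  auditLogging: true,  providerLiability: "strict provider liability" },
};

// Applications like medical diagnosis or financial advice would map to
// the "high" tier and inherit its stronger safeguards.
const domainTier: Record<string, RiskTier> = {
  "entertainment-chat": "minimal",
  "search-summarization": "limited",
  "medical-diagnosis": "high",
  "financial-advice": "high",
};

console.log(policy[domainTier["medical-diagnosis"]]); // high-tier obligations
```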


### Sandrine Elmi Hersi: Regulatory Framework


Representing ARCEP (French regulatory authority), Elmi Hersi outlined a three-pronged approach: accelerating transparency in AI models to make “black boxes” more auditable; preserving distributed intelligence by ensuring plurality of access to AI development inputs; and extending non-discrimination principles from network neutrality to AI infrastructure and content curation.


She raised particular concerns about content diversity, questioning how to ensure diversity when AI chatbots provide a single answer instead of the hundreds of web pages traditionally offered by search engines.


### Renata Mielli: Brazilian Perspective


Coordinator of CGI.br (Brazilian Internet Steering Committee), Mielli noted that while some internet governance principles like freedom and interoperability can transfer to AI, others like net neutrality may not directly apply since AI systems are inherently non-neutral, unlike internet infrastructure.


She emphasized transforming principles into technical standards while distinguishing between governance and regulation, and highlighted the need to reduce asymmetries and empower Global South voices in AI governance discussions.


### Hadia Elminiawi: African and Practical Perspective


Elminiawi provided insights from the African continent, noting that African countries’ AI capabilities vary significantly due to infrastructure, electricity, connectivity, and resource differences. She challenged idealistic transparency approaches, asking whether it is “realistic or even desirable to expect that all AI models be made fully open source.”


She suggested requiring open-source safety guardrails rather than full model transparency, proposing a more pragmatic approach balancing openness with security and investment concerns.


## Additional Interventions and Perspectives


### William Drake: Critical Analysis


Drake provided a critical intervention emphasizing the need to define precisely what aspects of AI require governance rather than applying generic principles. He questioned whether there is genuine functional demand for international AI governance, noting that “we simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand.”


He suggested developing a detailed mapping matrix of which internet properties apply to specific AI contexts and applications.


### Andrew Campling: Social Media Lessons


Campling suggested looking at social media governance lessons rather than internet governance, emphasizing duty of care and precautionary principles. He noted the importance of learning from past failures in social media regulation.


### Dominique Hazaël-Massieux: W3C Standards Work


Representing W3C, Hazaël-Massieux highlighted ongoing work on AI and web standards, focusing specifically on the intersection of AI and internet technologies rather than broader AI governance.


### Yik Chan Ching: Policy Network Perspective


From the Policy Network on Artificial Intelligence (PNAI), Ching mentioned ongoing research on liability, interoperability, and environmental protection in AI systems, noting significant progress in AI standards development across regions.


### Shuyan Wu: Digital Equity Focus


From China Mobile, Wu emphasized ensuring equal access, protecting user rights, and bridging digital divides in the AI era.


### Alejandro Pisanty: Commercial Reality


Participating online, Pisanty questioned fundamental incentive structures, asking “Why would OpenAI, Google, Meta, et cetera… why would they come together and agree to limit themselves in some way?” He advocated for applying existing rules for automated systems rather than creating entirely new frameworks.


## Key Themes and Challenges


### Architectural Differences


The fundamental difference between internet and AI architectures emerged as a central challenge. The internet’s distributed design contrasts sharply with AI’s concentrated ownership and control, creating new governance challenges.


### Market Concentration Concerns


Multiple speakers highlighted concerns about AI market concentration and the emergence of new gatekeepers that could limit user choice and content diversity, drawing parallels to earlier internet governance challenges.


### Transparency vs. Practicality


A significant tension emerged between calls for maximum transparency and practical constraints including investment protection and security concerns. Speakers debated appropriate levels and mechanisms for AI transparency.


### Global South Inclusion


Several speakers emphasized including Global South perspectives and addressing existing digital divides to prevent their reproduction in AI governance frameworks.


## Areas of Convergence


Despite disagreements, several areas of consensus emerged:


– **Risk-based approaches**: Multiple speakers supported prioritizing governance based on risk levels and application contexts


– **Technical standards importance**: Strong agreement on the need for AI interoperability standards


– **Safety and transparency needs**: General agreement that AI systems require more transparency than currently provided


– **Stakeholder inclusion**: Consensus on the importance of diverse participation in governance discussions


## Implementation Recommendations


The session produced several concrete recommendations:


### Continued Collaboration


Participants agreed to continue discussions through Dynamic Coalition mailing lists to address unresolved issues.


### Detailed Mapping Exercise


Drake’s suggestion for developing a mapping matrix of internet properties applicable to specific AI contexts was endorsed as a practical next step.


### Regulatory Development


ARCEP committed to completing its technical report on applying internet core values to AI governance.


### Focused Interventions


Rather than generic AI governance, speakers recommended focusing on AI-internet intersection points where governance needs and stakeholder incentives may be clearer.


## Unresolved Questions


The discussion concluded with acknowledgment of fundamental questions requiring further work:


– How to balance innovation incentives with transparency and accountability requirements


– Whether binding international AI agreements are feasible given current political realities


– How to address liability and responsibility in multi-agent AI systems


– What constitutes genuine functional demand for AI governance versus assumed need


## Conclusion


This session revealed both promise and challenges in applying internet governance principles to AI systems. While there was agreement on core values like safety and transparency, fundamental tensions emerged between internet and AI architectures, transparency ideals and practical constraints, and governance aspirations and commercial realities.


The discussion produced pragmatic recommendations focusing on risk-based approaches, technical standards development, and targeted interventions. However, unresolved tensions around transparency requirements, stakeholder participation, and international cooperation indicate significant work remains to develop effective AI governance frameworks that preserve internet values while addressing AI’s unique characteristics.


The session demonstrated the value of diverse international perspectives while highlighting the need for continued dialogue and practical experimentation to bridge the gap between principles and implementation in AI governance.


Session transcript

Olivier Crepin-Leblond: Right, welcome everybody to this session, this joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality. I’m Olivier Crepin-Leblond, and co-chairing this session are going to be Luca Belli for the Dynamic Coalition on Network Neutrality and Pari Esfandiari for the Core Internet Values. It’s great to see so many of you here. As Luca said, if anybody wants to step up over to the table here, they’re very welcome to do so. We are going to have a session that’s going to be quite interactive. So we’ll have the speakers speak and so on, and then we’ll see if we can have a good discussion in the room about the topic. I’m just going to do a quick introduction of the speakers that we have. We’ll start with four speakers, each providing their angle on the topic. We’ll have Vint Cerf, who’s joining us remotely. Unfortunately, he couldn’t make it in person at this IGF. So he’s over in the US and he will let us know at some point when he is online, because, as often, he is also doing more than one session at the same time. Actually, I am. I am online. He’s already there. Goodness gracious. OK, sorry, Vint. I have two eyes, but they both look in the same direction. I don’t know why. I should have also checked the screen. So: Vint Cerf, Hadia Elminiawi, and then we’ll have Renata Mielli, also here, and Sandrine Elmi Hersi, who’s sitting next to me. After that, we’ll have what we call additional speakers. They’ll be commenting on what they’ve heard from the first set of speakers. There are three commenters: William Drake, Bill Drake, Roxana Radu and Shuyan Wu, who’s just arrived from China, so at the very last minute she managed to make it here. So welcome to all of you. And then after that, we’ll open it to a wider discussion. But I’m kind of wasting time. We’ve only got 75 minutes, so I’m going to hand the floor straight over to Luca and to Pari for the next stage. Thank you.


Pari Esfandiari: Thank you very much, Olivier, and welcome everybody. It’s great to be here with all of you. We convened this session, Bridging Internet and AI Governance: From Theory to Practice, not just because things are changing fast, but because the way we think about digital governance is being fundamentally reshaped. As technologies converge and accelerate, our governance systems haven’t kept up, and at the center of this shift is artificial intelligence. Let’s start with theory. The internet’s core values: global, interoperable, open, decentralized, end-to-end, robust and reliable, and freedom from harm. These were not just technical features, but deliberate design choices that made the internet a global commons for innovation, diversity and human agency. Now comes generative AI. It doesn’t just add another layer to the internet; it introduces a fundamentally different architecture and logic. We are moving from open protocols to centralized models, gated, opaque and controlled by a handful of actors. AI shifts the internet’s pluralism towards convergence, replacing inquiry with predictive narration and reducing user agency. This isn’t just a technical shift. It’s about who gets to define knowledge, shape discourse and influence decisions. It’s a profound governance challenge and a societal choice about the kind of digital future we want. If we are serious about preserving user agency, democratic oversight and an open, informative ecosystem, the core internet values can serve as signposts to guide us, but they need active support, updated policies and cross-sector commitment. This is where the practice begins. The good news is we are not starting from scratch: from UNESCO’s AI ethics framework to the EU AI Act, the US AI Bill of Rights and efforts by Mozilla and others, we are seeing real momentum to root AI governance in shared fundamental values. So yes, there is a real divergence, but also real opportunities to shape what comes next. And that’s our focus today. With that, I will hand it over to my co-moderator, Luca Belli. Thank you.


Luca Belli: Thank you very much, Pari and Olivier. And also, let me hold this. Is this working? Yes. Yes. Okay. Thank you. Are you sure? Because I’m not hearing myself. Is this working? I am here. Can you hear us? Okay. I’m sorry. It’s my headphone. It’s not working. It’s not useful when I have to hear myself anyway. All right. So thank you very much, Olivier and Pari, for having organized this and for having been the driving force of this session, which actually builds upon what we have already done last year in our first joint venture, which was already quite successful. I always say that it’s good to build upon the sessions, building blocks and reports that we have already elaborated, so that we move forward, right? And something that already emerged as a sort of consensus last year in Riyadh are two main points. First, we have already discussed internet governance and internet regulation for pretty much 20 years, at least here at the IGF. So we can start to distill some of those teachings and lessons into what we could apply to regulate the evolution of AI and AI governance. Second, to quote the expression Vint used last year, the Internet and AI are two different beasts. So we are speaking about two digital phenomena, but they are quite different. And the Internet, as Pari was reminding us very eloquently, has been built on an open, decentralized, transparent, interoperable architecture that made the success of the Internet over the past 50 years, at least since Vint penned it in 1974. But the question here is how we reconcile this with a highly centralized AI architecture. And I think that here there is a very important point we have been working on in the net neutrality and Internet openness debate over the past years, which is the concept of Internet generativity that we have enshrined in the documents and reports we have elaborated here over the past years: the capacity of the Internet to evolve thanks to the unfiltered contributions of the users, as a consequence of the fundamental core Internet values. Openness and transparency create a level playing field, a capacity to innovate, to share and use applications, services and content, and to make the Internet evolve according to how the users want it to. So users are not only passive users; they are prosumers. They create the Internet. Now, this is in fundamental tension with an AI that is frequently proprietary, non-interoperable and very opaque, both in the data sets that are used for training, which usually are the result of massive scraping of both personal data and copyrighted content in very peculiar ways that might be considered illegal in most countries with data protection or copyright legislation, and in its training and output, which are very much opaque for the user. And very few companies can do this and supply this. So there is an enormous concentration phenomenon ongoing, which is quite the opposite of what the original internet philosophy was about. Now, to discuss this point, we have a series of fantastic speakers today. As I was mentioning before, as we are celebrating 51 years of the paper by Vint and Bob Kahn on the internetworking protocol, a protocol for interconnecting networks, right, if I’m not mistaken, I think the first person that should go ahead should be Vint. So Pari, please, the floor is yours to present Vint.


Pari Esfandiari: Thank you very much. We have two overarching questions, and we would like our speakers to focus on those two overarching questions. I will read them for you. How can the internet’s foundational principles of openness and decentralization guide transparent and accountable AI governance, particularly as generative AI becomes a main gateway to content? And the second question: how can fundamental network neutrality principles, such as generativity and competition on a level playing field, apply to AI infrastructure, AI models, and content creation? So Vint, drawing on your unique experience in both founding the architecture of the internet and your work with the private sector, we are curious to hear your comments on these questions. Over to you.


Vint Cerf: Well, thank you so much for this opportunity. I want to remind everyone that I am not an expert on artificial intelligence. I barely manage my own intelligence, let alone artificial. But I work with a company that has invested very heavily in AI and in AI-based services. So I can reflect a little bit of that in trying to respond to these very important questions. The first thing that I would observe is that the Internet was intended to be accessible to everyone. And I think the AI efforts are reflective of that as well. The large language models, well, let me distinguish between large language models and machine learning tools for just a moment. All of you are well aware that AI has been an object of study since the 1960s. It’s gone through several iterations of phases, the most recent of which is machine learning, reinforcement learning, and then large language models. The reinforcement learning mechanisms have given us things like programs that can beat the best players of Go, programs that can tell you how proteins fold up, and that tells us something about their functionality. And more recently, there’s something at Google called Alpha Evolve, which is an artificial intelligence system that will invent other software to solve problems for you. The large language models that we interact with embody huge amounts of content, but they are specialized when they interact with the users. You use the term prompting to elicit output from these large language models. And the point I want to make here is that every time someone interacts with one of those, they are specializing it to their interests and their needs. So in a sense, we have a very distributed ability to adapt a particular large language model to a particular problem or to respond to a particular question. And that’s important, the fact that we are able to personalize. Our interactions with these sources of information is a very important element of useful access. The question about interoperability of the various machine learning systems is partly answered by the agent model idea. That is to say, the large language models are becoming mechanisms by which we can elicit not only responses, but also actions to be taken. So the so-called agentic generative AI is upon us. And consonant with that are two other standards that are being developed. One is called A2A, or agent-to-agent interaction, and the second is called MCP, which is a model context protocol to give these artificial intelligence agents a concept of the world in which they’re actually operating. The reason these are so important, and they create interoperability among various agentic systems, is that it’s very important for precision. It’s important that the agents, when they interact with us, and when they interact with each other, to have a well-defined context in which that interaction takes place. And we need clarity, and we need confidence that the semantics are matched between the two agents. If anyone has ever played that parlor game called telephone, where you whisper something in someone’s ear, and then they whisper in the next person’s ear, and you go down the line, and whatever comes out on the other end is almost never what started out at the beginning. We don’t want chains of agents to get confused, and so the A2A and MCP are mechanisms. to try to make that work a lot better. 
So I think this is a very important notion for us to ingest into the work of the core internet values, except they will have to become core AI values, which is clarity in interaction among the various agents, of course, among other things. Last point I would make is that as you interact with large language models, the so-called prompting exchanges, one of the biggest questions that we always have is how accurate is the output that we get from these things? We all know about hallucination and the generation of counterfactual output coming from agents. It’s very important that provenance of the information that is used by the agents or by the large language models and references be available for our own critical thinking and critical evaluation of what we get back. And so once again, that’s a kind of core internet value. How do I evaluate or how can I evaluate the output of these systems to satisfy myself that the content and the response is accurate? So those are just a few ideas that I think should inform the work of these dynamic coalitions as we project ourselves into this online AI environment. But I’ll stop there because I’m sure other people have many more important things to say in response to these questions.


Pari Esfandiari: Thank you very much, Vint, for that very informative discussion. And with that, I would go to Sandrine. Sandrine, based on your experience shaping digital strategies within government, how would you see this? Thank you.


Sandrine Elmi Hersi: Thank you. And let me first say that it’s a real pleasure… Thank you all for joining this session today and for discussing this important topic with partners from the Net Neutrality and Core Internet Values coalitions. Before we ask how to apply openness and transparency to AI governance, I would like to insist on the why, and why this application has become essential. As Vint already covered, LLMs, notably generative AI tools, are becoming a new default point of entry to online content and services for users. Since our conversation at the last IGF in Riyadh, we’ve seen this trend accelerating through the growing use of individual chatbots, but also the establishment of response engines integrated into mainstream search tools. Generative AI is also increasingly embedded directly in end-users’ devices. And we are seeing a shift from early-generation LLMs to new RAG (Retrieval Augmented Generation) systems that are now included in AI tools and that can draw directly from the web. Looking ahead, agentic models could also centralize a wide range of users’ actions into a single AI interface. So the question is really: will tomorrow’s Internet still be open, decentralized and user-driven if most of our online actions are mediated by a handful of AI tools? Now, regarding the how: ARCEP, the French regulatory authority for electronic communications, is currently conducting technical hearings and field testing with a team of data scientists to explore this very question. Although our report is still in development, we can already identify three main areas for action to apply internet core values to AI governance. The first area is accelerating on AI transparency. Understanding generative AI models, what data they use, how they process information, and what limits they have, is a prerequisite for trust. There is some progress; more and more players are now engaging with researchers and through sectoral initiatives such as standards and codes of conduct, but many models remain black boxes. We need greater openness, especially to the research community, to improve auditability and explainability, but also the efficiency of models. The second area is preserving the notion of intelligence at the edge of networks, which is the original spirit of the internet: intelligence distributed among users and applications, not centralized in platforms or infrastructure. We must notably ensure that users remain able to choose among diverse services and sources. This may require working on the technical and economic conditions that shape AI outputs, to guarantee a certain level of neutrality, plurality of views, and openness to a diverse range of content creators and innovators. Last but not least, there is the principle of non-discrimination, which is also a central part of net neutrality. The non-discrimination principle was originally applied to prevent Internet Service Providers from privileging their own services or partners in vertical markets. But today’s ISPs are not the only digital gatekeepers that can narrow the perspective and freedom of choice of end-users. So at ARCEP, we are now assessing to what extent this principle of non-discrimination and openness can be extended to AI infrastructure and AI models, but also to how AI curates and presents content. On this, very shortly, we are notably investigating two questions.
The first one is how to preserve the openness of AI markets, notably by ensuring that a plurality of economic players have access to the key inputs necessary for LLM development, including data, computing resources, but also energy. The second question we are diving into is how to keep a diversity of content on the internet, knowing that when they use AI chatbots and response engines, end-users only get access to one answer instead of hundreds of web pages. So we must ensure that generative AI is not simply amplifying already dominant sources, but is open to smaller and independent content creators and innovators. That might mean, in the future, working on defining sector-wide frameworks or interconnection standards on fair contractual conditions, as was done for IP interconnection. And to end: the goal is not, of course, to block innovation, but on the contrary, to make sure that innovation and AI are compatible with preserving the internet as a common good.


Luca Belli: Thank you very much, Sandrine, for these excellent thoughts. I think it’s very good to see how you are illustrating that what has been done in terms of internet openness regulation and net neutrality debates over the past 15 years is precisely an attempt to enshrine into law the original philosophy and priorities of the internet, openness, transparency and decentralization, and to make sure that when what we can call gatekeepers or points of control emerge, they behave correctly, and if necessary a law protects the rights of the users and the regulator oversees the law to make sure that the obligations are implemented. Now what is very difficult is to understand who the new gatekeepers are and how to implement a law that maybe does not even exist yet in these terms. So I would now like to give the floor to Renata Mielli, who is currently the coordinator of CGI.br, and CGI has also been leading the debate on net neutrality, internet openness and now many AI issues in Brazil. So Renata, the floor is yours.


Renata Mielli: Thank you, Luca, and thank you all for inviting me to this session, especially because I believe we are establishing a continuity and deepening the debate we started in Riyadh, where last year we discussed AI from a perspective of sovereignty and the empowerment of the Global South, and how to reduce the existing asymmetries in this field. Now we are talking about how to bridge internet governance and principles to AI principles and governance. To contribute to this session, I chose to consider the work we have done in CGI.br on principles for the Internet and to reflect on what makes sense and what does not make sense when we are thinking about AI, in a perspective of establishing a set of principles for the development, implementation and use of AI technologies, taking into account what Luca just said about the differences, the high economic concentration and the opacity of the systems, and taking into account also what Vint said: these are two different beasts. In this sense, I would like to start by looking at what is not covered by these ten principles when we are talking about AI. The first thing I see, and a lot of people mention this, is transparency and explainability, because these two principles are essential when we talk about AI: it involves a series of procedures that are not present in the same way when we are dealing with the Internet. The Internet is open, the Internet is decentralized, all the protocols are built in a very collaborative way, but this is not the case for AI. So AI governance, deployment and development need to ensure high levels of transparency, especially for the social impact assessment of this type of technology, as well as for the creation of compliance processes that ensure other principles like accountability, fairness and responsibility. We are discussing a series of specific principles for AI that were not necessarily conceived in the context of internet governance. In terms of CGI’s Decalogue, I’d like to point out which principles can be, in some way, interoperable with AI principles. In this case I think, of course, of freedom, human rights, democratic and collaborative governance, universality in terms of access and the benefits of AI for all, diversity when talking about language, culture and the necessity of inclusion of all kinds of expressions, also standardization and interoperability between the various models, and, of course, the need for a legal and regulatory environment for these systems. For these, we can think that the perspective used for internet governance is applicable to AI principles in context. From another perspective, principles like security need to be addressed together with two further principles, safe and trustworthy, and ethical, so they can be answered with a discussion about impacts on rights like privacy and data protection. Finally, an important part of this exercise of evaluating internet governance principles and their possible alignment with AI governance principles is to identify what was conceived for the internet that is not applicable in the AI context. In this aspect, only to mention it because I don’t have more time, I point to the principle of net neutrality, because what we have proposed is to observe net neutrality in relation to telecommunications infrastructure, and this is not applicable to AI. And there is no neutrality in the technology itself: AI is not neutral.
And I think inimputability, the principle that the network itself should not be held liable, is another principle that is not easily transferred from the internet to AI, because here we have to understand responsibility along the AI chain. So these are some thoughts I wanted to share at the beginning of this panel. Thank you very much.


Luca Belli: Thank you very much, Renata. And actually you also bring into the picture something extremely relevant, for which I think the IGF, being a UN forum, is an appropriate venue. The fact is that we have been debating this for 20 years, and there have also been a lot of debates going on in the Global South about this for at least 20 years. But what we see in terms of mainstream debates and policymaking, and even the construction of AI infrastructure, especially cloud infrastructure, is an enormous predominance of what we could call the Global North. So it’s very interesting to start to bring the Global South voices into the debate. We’ve started with Brazil. Now we are continuing with Ms. Hadia Elminiawi, who is here representing the African continent, which is an enormous responsibility. So please, Hadia, the floor is yours.


Hadia Elminiawi: Thank you. Thank you so much. And I’m happy to be part of this very important discussion. So let me first start by highlighting the similarities between AI and the Internet that make the Internet’s core values well suited as a foundation for AI governance. AI can be considered one of those general-purpose technologies impacting economic growth, maybe more quickly than any other general-purpose technology that has emerged in the past, such as steam engines, electrification and computers. AI is driving revolutionary changes in all aspects of life, including healthcare, education, agriculture, finance, services, policies, and governance. By definition, AI isn’t just one technology, but a constellation of them, including machine learning, natural language processing, and robotics, that all work together. Similarly, the Internet stands as another powerful general-purpose technology that has fundamentally changed the way we live, work, and interact, enabling new ways of communication, education, service provision, and conducting business. The Internet infrastructure is foundational to artificial intelligence, enabling cloud services, including managing on-site data centers and real-time applications. In addition, many of the services and applications that are delivered over the Internet infrastructure are using AI to deliver better experiences, services, and products to users. When it comes to Africa, the capabilities of African countries regarding AI vary significantly across the continent due to differences in the availability of resources and infrastructure, including reliable and efficient electricity, broadband connectivity, data infrastructure like data centers and cloud services, access to quality data sets, AI-related education and skills, research and innovation, and investment. In July 2024, the African Union Executive Council endorsed the African Union Continental AI Strategy. The Continental AI Strategy is considered pivotal to achieving the aspirations of the Sustainable Development Goals. And likewise, the internet plays a critical role in achieving the Sustainable Development Goals: no poverty, good health and well-being, quality education, industry, innovation and infrastructure. Other relevant regulatory approaches around the globe include the EU’s AI Act adopted in 2024, the Executive Order for Removing Barriers to American Leadership in AI of January 2025 and sectoral oversight in the US, the UK Framework for AI Regulation, the G7’s 2023 Guiding Principles and Code of Conduct, rules that China has developed, and Egypt’s second edition of its National Artificial Intelligence Strategy in 2025. In all those strategies, we see some of the core principles that have shaped the internet, such as openness, interoperability and neutrality, guiding various AI governance strategies. So the question now becomes: how do we translate those agreed principles and frameworks into actions? And in some cases, what do those principles mean or look like in practical terms? So let’s look at openness and transparency. What does this mean?


Luca Belli: Hadia, may I ask you to wrap up in 30 seconds?


Hadia Elminiawi: Yes, sure. This will be very quick; I’m almost done. Maybe it means open access to research, and requiring AI models to include components that allow full understanding and auditing. But what does ensuring transparent algorithms mean in practical terms? Is it realistic or even desirable to expect that all AI models be made fully open source? Given the amount of capital investment in these models, requiring complete openness could discourage investment in AI models, destroying a lot of economic value and hindering innovation. At the same time, transparency and openness raise some important ethical and security concerns. Is it truly responsible or logical to allow unrestricted access to tools that could be used to build weapons or plan harmful, disruptive actions? We may need layered safeguards: AI algorithms on top of other AI algorithms to ensure responsible and secure use. So what alternative solutions can we consider? One possibility could be requiring all AI developers to implement robust safety guardrails and to have these guardrails open source, rather than the models themselves. In addition, AI developers could be required to publish the safety guardrails that they have put in place. I guess this is an open discussion. And with that, I would like to wrap up and thank you.


Pari Esfandiari: Thank you very much, Hadia. And on that, I want to thank all the panelists for their insightful contributions. Now I want to invite our invited community members to comment on what they have heard. You are also welcome to share your own views on the broader issues we have touched upon. And on that, I would start with Roxana. Roxana Radu, you have five minutes. Please start.


Roxana Radu: Thank you very much. I’m sorry for not being able to join you in person. Let me start by saying that there is a flourishing discussion now around ethics and principles in AI governance. In fact, what we’ve seen developed over the last five or six years is a plethora of ethical standards and guidelines and values to adhere to. But the key difference with internet governance is the level of maturity in these discussions, and also the ability to integrate those newly identified values into technical, policy and legal standards. What we’ve done in internet governance over the last 30 years is much more than identifying core values. We apply them, we’ve embedded them into core practices, and we are continuing to refine these practices day by day. I think there are four key areas that require attention at this point in time, where we can bridge the internet governance debates and the AI governance discussions. First is the question of market concentration. Luca was already alluding to gatekeepers: how do we define them in this new space, with highly concentrated ownership of the technology, of the infrastructure, and so on and so forth? Second is diversity and equity in participation, engaging different stakeholders, but also stakeholders from parts of the world that are not equally represented. Thirdly, there is the hard-learned lesson of personal data collection, use, and misuse. We have more than 40 years of experience with that in the internet governance space, and we’ve placed emphasis on data minimization: do not collect more than what you need. This lesson does not seem to apply to AI; in fact, it’s the opposite. Collect data even if you are not sure about its purpose currently; machines might figure out a way to use that data in the future. This is the opposite of what we’ve been practicing in recent years in internet governance. And fourthly, and very much linked to these previous points, there’s a timely discussion now around how to integrate some of these core values into technical standards. With AI, there seems to be a preference for unilateral standards, the giants developing their own standards and sharing them through APIs, versus globally negotiated standards, where a broader community can contribute and those voluntary standards could then be adopted by companies and by participants in those processes more broadly. I think we need to zoom in on some of these ways of bringing those core values into practice. And it’s very opportune to do that now, at the IGF. Thank you.


Luca Belli: Thank you very much, Roxana. I think that there are some interesting points emerging here. Something I want to very briefly comment on, because it was raised before, is that we are discussing here how core internet values can apply to AI. And I think it’s interesting to do this in a joint venture with the Coalition on Net Neutrality, because net neutrality is actually the implementation of core internet values into law. And as any lawyer who has studied Montesquieu would tell you, what counts in the law is the spirit of the law, right? I remember writing an article 10 years ago on the spirit of the net, where I was mentioning that net neutrality was precisely the enshrining into law of the spirit of the net, the core internet values. And so we now have to find a way to translate this into something applicable to AI. I think that is the huge challenge we have here today. And I’m pretty sure that our friend Bill Drake knows how to solve this challenge. Bill, the floor is yours.


William Drake: Obviously I do not. Thank you. Okay, well, first of all I congratulate the organizers of this session on putting together an interesting concept. Trying to figure out how you map internet properties and values into the AI space is definitely a worthwhile activity. As Roxana noted, it builds on all the discussions at the international level in recent years about ethics, whether in UNESCO or other kinds of places, and I think it’s worth carrying this forward. But I would start by noting a few constraining factors, three in particular. First, conceptually, let’s bear in mind, again going back to what Vint said, that we’re talking about different beasts. We’re not talking here about a relatively bounded set of network operators and so on; we’re talking about a vast and diverse range of AI processes and services in an unlimited range of application areas, from medicine to environment and beyond. So which internet properties will apply, generally or in specific contexts, simply can’t be assumed. We need to do close investigation and mapping, and I think there’s a great project there for somebody who wants to develop that matrix. I look forward to reading whoever does that first. There are reasons to wonder whether some of these things really do apply clearly. Renata suggested that net neutrality, for example, might not be so directly applicable. There are a lot of other challenges there intellectually. Secondly, of course, there are the material interests of the private actors involved. Luca referred to the concentration issues. It’s nice to think about values, but I wouldn’t expect all the US and Chinese companies that are involved in this space to join an AI engineering task force and hum their support for voluntary international standards. To the contrary, they’ve demonstrated that they’ll do pretty much anything to promote their interests at this phase, including sponsoring military parades for dear leaders in Washington, and so on. So it’s unclear how much they would embrace any kind of externally originated constructs like neutrality, openness, transparency, et cetera, that don’t really fit well into their immediate profitability profile, and how well these things would apply to very large online platforms and search engines. Again, real challenges there. And lastly, of course, there are the material interests of states. Net neutrality, of course, is verboten in the United States now. Applying it to AI, of course, would be too. Generally speaking, multilateral regulatory interventions are impossible to contemplate in the Trump era, at least for those of us who are in North America, and I’m not sure what China would sign on to in that context. In principle, though, you would like to think that transparency and openness with regard to governance processes, especially international governance processes, could be pursued. And there I would just like to flag a couple of quick points before I run out of time, lessons from Internet governance that I think are relevant. First, we have to be real clear about where there’s an actual demand for international governance and regimes and the application of these kinds of values. We simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand. Often people point to things and say, oh, there’s some new phenomenon, we must have governance arrangements. But very often the demand for governance arrangements is not equally distributed across actors, and those highfalutin aspirations don’t get fulfilled. I mean, we used to talk about safety, right? There was a lot of international discussion around safety. Now suddenly safety is out the window, and we’re all talking about how we want to promote innovation and investment. So it’s easy to say that we have this demand to do all these wonderful new normative things, but in reality, when push comes to shove, we have to look at where there’s a real functional demand. Where do you actually need international governance, interoperability or harmonization of rules? In the telecom space, if you look historically, with radio frequency spectrum we had to have non-interference, and telecom networks had to be interconnected and have standards to allow the networks to pass traffic between them. So there was a strong incentive for states to get on board and do something, even if they had different visions of how to do that and they could fight over it. Which aspects of the AI process absolutely require some kind of coordination or harmonization? It’s not entirely clear, and I think we can’t just assume that. One other point, because I don’t want to run out of time, is just to say, as somebody who was around 20 years ago and remembers all the fights over internet governance and what is internet governance and so on: we are in a liminal moment like we were 20 years ago, where people are not clear what the phenomenon is, how we define it, what governance means in this context, et cetera. This requires a great deal more thinking when you’re applying it to the specificities of the AI space. I hear a lot of these discussions in the UN where people seem to be just grafting constructs from other international policy environments onto AI and saying, well, we’ll just apply the same rules that apply elsewhere. And this is like saying that we’ll apply the rules from the telegraph to the telephone, and from the telephone to television; with every new technology, we look at it through the lens of previous technologies, but often that doesn’t work so well. And my last point, and then I’ll stop: I’d be very careful in thinking about multilateral action. I noticed that the G77 and China, in a reaction to the co-facilitators’ text on AI, are saying that they want binding international commitments coming out of the UN process, that they will not accept purely informal agreements coming out of the UN process. I look at what’s going on in the AI space and I’m thinking, seriously, what kind of binding international agreements are we going to begin negotiating, and how, in the United Nations in the near term? And if you set that up at the front end as the object that you’re trying to drive towards, you can just see how difficult all this is going to become very quickly. So I probably went over five minutes, so I’ll stop. Thank you.


Pari Esfandiari: Thank you very much, Bill. And for the sake of time, I’m not going to reflect. You packed an awful lot of information into that section, but we don’t have enough time. So I’ll go directly to Shuyan Wu. Shuyan Wu, the floor is yours.


Shuyan Wu: Okay, thank you. Hello, everyone. It’s a great pleasure to attend this important discussion. I am from China Mobile, one of the world’s largest telecom operators. I’d like to share the practices and experiences of China Mobile in bridging internet and AI governance. In the age of the internet, we have continued to promote the development of the internet ecosystem towards fairness, transparency, and inclusiveness. This commitment is reflected in our efforts across infrastructure development, user rights protection, and bridging the digital divide. Firstly, in terms of infrastructure development, we strive to ensure equal access to and inclusive use of internet services. China Mobile’s mobile and broadband networks now cover all villages across the country. We’ve also built the world’s largest and most extensive 5G network. Second, when it comes to protecting users’ rights and interests, we work actively to create a transparent and trustworthy online environment. We provide clear, user-friendly service mechanisms and have introduced quality management tools to ensure users’ right to information and independent decision-making. For specific groups such as the elderly and minors, we focus on fraud prevention education and offer customized services to build a safer and greener digital space. Third, to bridge the digital divide and support inclusive growth, we’ve implemented targeted solutions. For elderly users, we offer dedicated discounts and have tailored our smart services to their needs. For minors in rural areas, our 5G smart education cloud network services are helping to reduce the gap in education resources between urban and rural communities. As we transition from the internet era to the age of AI, China Mobile is actively adapting its experience and capabilities to the evolving needs of AI governance. We are striving to build a digital ecosystem featuring universal access, decentralization, transparency, and inclusiveness. We are investing in AI infrastructure to promote resource sharing and encourage decentralized innovation, backed by our strong computing power, data resources, and product solutions such as large language models and AI development platforms. At the same time, we continually leverage AI capabilities to build a transparent and trustworthy digital environment, effectively safeguarding user rights. For instance, China Mobile applies AI-powered information detection technologies in scenarios like video calls and financial services to help users identify false or harmful content. Moreover, we are committed to ensuring that the benefits of AI are shared by all. For minors, we have launched personalized education and eminence education scenario interaction solutions. For the elderly, we offer AI-powered entertainment, health monitoring, and safety services. And for rural areas, our smart village doctor system delivers quality health care to remote communities. That’s all for my sharing. Thank you.


Olivier Crepin-Leblond: As everyone points over to me, thank you very much. Now we’re going to open the floor for your input and your feedback on what we’ve heard so far. I’m also the remote participation moderator, and there’s been a really interesting debate going on online; I’m not sure how many of you have been following it. I was going to ask whether we could hear from the two main participants who were speaking back and forth online, Alejandro Pisanty and Vint Cerf, because Vint, of course, is always active both online and with us. After those two, we’ll start with the queue in the room. All right, let’s get going. Alejandro, you have the floor. Thank you.


Alejandro Pisanty: Good morning. Can you hear me well? Yes, very well. Thank you. I was making these points also in previous discussions of the Core Internet Values dynamic coalition. If you are trying to translate the experience of governance from the internet to artificial intelligence, I think there are a few points that are valuable to take into account, and many of them have been made already, so I will try to group them. First, you have to define pretty well what you want to govern: to what branch of the enormous world of artificial intelligence you actually want to apply some governance. Otherwise you will have some serious ill effects. Using AI for molecular modeling, protein folding and so forth, or using it as a back-office system for detecting fraud in credit cards, are very different beasts in turn. So it’s very important not to regulate one of them with such generality that rules from one will impede progress in others where they are absolutely not necessary. Second, and we learned this from 30 years of internet governance, make sure you are governing the right thing, in the following sense: what does AI, like the internet in its turn, bring that is new to things we already know? What rules do we already have that we can just apply or modify to take AI into account? For example, we have purchasing rules, especially in governments, where you know the constraints on systems that you can buy for government: they cannot be discriminatory, they cannot be harmful, and so forth. So you can apply those rules instead of creating a whole new world. It’s like medical devices: you already have so many rules for automated medical devices that you can extend to artificial intelligence, covering the harms and the consequences of the harms. These will be different, they will be amplified, there’s probability, there’s uncertainty, but we know how to deal with that; we just need to change the scale and gain a better understanding of these factors. Next, what do you expect to obtain from governance? Do you want more competition? A reduction of discrimination and bias? More respect for intellectual property? More access to global resources for the global south, and so forth? Because this will determine the institutional and organizational design. And next, and most important, and this is something that a NetMundial+10 meeting, for example, does among the other good things it has: how do you actually bring the different stakeholders together? Who are the stakeholders, and how do you bring them to the table? If you want to regulate large language models provided over the internet for chatbots, which are the dominant aspect of public discussion these days: why would they come to the table? Why would OpenAI, Google, Meta, et cetera, not to mention Mistral and certainly the providers in China and other countries which are operating under completely different sets of rules, why would they come together and agree to limit themselves in some way? To sit at the table with people who are their users or their clients, and potentially their competitors if something arises from their innovation? And especially, how do you bring them together to put some money into the operation of the system? To agree to have a structure, to agree to have their hands tied to some extent?
What has happened in internet governance, for example, is very different for, let’s say, the domain name system and for fighting phishing and scams. For the domain name system, you had companies fearing that strong rules for competition would come from the US government, and they finally agreed to come together with civil society and the technical community, which is also a key point: the experts always have to be at the table. As the ICANN paper has stated very recently for internet governance, the technical community is not just one more participant. It’s a pillar, and you need to know what the limitations and the capabilities of the technology are. I’ll stop there. Thank you.


Olivier Crepin-Leblond: Thank you, Alejandro. OK, next, Vint Cerf.


Vint Cerf: First, I have to unmute. So thank you so much, Alex. I always enjoy your line of reasoning. Let me suggest a couple of small points. The first one is that with regard to regulation of AI-based applications, I think the focus of attention should be on risk to the users of those technologies and, of course, potential liability for the provider of those applications. So a high-risk application, such as medical diagnosis, recommended medical treatment, or maybe financial advice, ought to have a high level of safety associated with it, which suggests that if there is regulation, the provider of the service has to show due diligence, that they have taken steps that are widely agreed to reduce risk for the user. So risk is probably a very important metric here, and concurrently, liability will be a very important metric for action by the providers of AI-based services. I think another thing of significance is the provenance of the materials used to train these large language models, and explainability: chain of reasoning, chain of thought, those sorts of things, to help us understand the output that comes from interacting with these large language models. And finally, I mentioned this earlier, but let me reiterate that the agent-to-agent protocol and the model context protocols are there, I think, partly to make things work better and more reliably, but they might also be important for limiting liability. In other words, there’s a motivation for implementing and designing these things with great care, so that it is clear, for example, in a multi-agent interaction, which agents might be responsible for which outcomes. Again, something that relates to liability for parties who are offering these products and services. So I’ll stop there. I hope that others who are participating will be able to elaborate on some of these ideas.
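To make that attribution idea concrete, here is a minimal sketch, in Python, of a message envelope for a multi-agent interaction that records which agent performed which step. It is purely illustrative: the class names, fields, and identifier scheme are assumptions for the example, not the actual agent-to-agent (A2A) or model context protocol (MCP) formats Vint mentions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentStep:
    """One attributable step in a multi-agent interaction."""
    agent_id: str   # which agent acted (hypothetical identifier scheme)
    action: str     # what it did
    timestamp: str  # when, as an ISO 8601 string

@dataclass
class Envelope:
    """A message that carries its own chain of responsibility."""
    payload: str
    trace: list[AgentStep] = field(default_factory=list)

    def record(self, agent_id: str, action: str) -> None:
        """Append an attributable step to the message's trace."""
        self.trace.append(
            AgentStep(agent_id, action, datetime.now(timezone.utc).isoformat())
        )

# Two agents handle a request; the trace later shows who did what,
# which is the kind of record a liability regime could draw on.
msg = Envelope(payload="summarize patient intake form")
msg.record("triage-agent", "classified request as medical, high-risk")
msg.record("summarizer-agent", "produced summary from payload")
for step in msg.trace:
    print(f"{step.timestamp} {step.agent_id}: {step.action}")
```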


Olivier Crepin-Leblond: Thank you, Vint. Just one point: earlier in the chat, you mentioned, I’m seeing here, indelible ways to identify sources of content used to train AI models. Could you explain a bit?


Vint Cerf: Yes, I was trying to refer to provenance here. The thing that people worry about is that the material used to train the model may be of uncertain origin. If someone says, well, how can I rely on this model? How do I know what it was trained on? Here, I think it should be possible to identify what the sources were in a way that is incontrovertible. Digitally signed documents, or materials whose provenance can be established, are important, because then we can go back to the parties providing those things and ask them questions about the verifiability of the material that’s in that training data.
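As an illustration of the signed-provenance idea Vint describes, the following minimal Python sketch hashes a training document and signs a manifest entry so the source can later be verified. The manifest fields, the provider identifier, and the choice of the third-party cryptography library are assumptions for the example, not a standard.

```python
import hashlib
import json

# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A hypothetical training document whose origin we want to pin down.
document = b"example training document"

# The content hash identifies the exact bytes that went into training.
entry = {
    "sha256": hashlib.sha256(document).hexdigest(),
    "source": "example.org",  # hypothetical provider identifier
}
manifest = json.dumps(entry, sort_keys=True).encode()

# The provider signs the manifest entry; the signature is the
# "incontrovertible" link back to the source that Vint describes.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(manifest)

# Anyone holding the provider's public key can verify the entry later;
# verify() raises InvalidSignature if the manifest was altered.
private_key.public_key().verify(signature, manifest)
print("manifest entry verified:", entry["sha256"][:16], "...")
```

Manifests like this could be published alongside a model so that auditors can check each claimed training source against its signature.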


Olivier Crepin-Leblond: OK, thanks very much for this, and apologies for the wait. But please, over to the gentleman standing at the microphone, and please introduce yourself in your intervention.


Audience: Thank you. Yes. Hi. Thank you for the excellent panel. I’m Dominique Hazaël-Massieux. I work for W3C where, among other things, I oversee our work around AI and its impact on the web, so this is a place where a lot of web standards are being developed. I wanted to make two remarks, one on scope and one on incentives for governance. On scope: we are at the IGF, and it’s been mentioned a number of times that AI is extremely broad. One useful way to segment the problem is to look at the intersection of AI and the internet, and there are a number of those intersections: AI has been fed from a lot of web content, a lot of web content is now being produced through AI, and AI is starting, as Vint was describing, to be used as agents on the web and on the internet. So we should look exactly at these intersections and at what AI changes about the existing ecosystem. For AI providers, web content and services are critical components of their strategy, both for building their tools and for distributing them, and that can only remain true if they don’t impoverish the ecosystem to the point where there is no more content to feed on, and no more services willing to reuse or integrate with their systems. So at the end of the day, I think it’s really a matter, in particular in this emerging agent architecture that Vint was describing, of understanding what the expectations for these agents are, drawing on rules that already exist. For instance, in the web space we have a number of very clear expectations as to what you ought to do if you’re a browser, literally a user agent. Understanding how those apply to AI-based agents is, I think, going to be very illuminating about what kind of governance we should put in place around that.


Olivier Crepin-Leblond: Thank you very much for your intervention. And the next person in line, please introduce yourself.


Audience: Yeah. Hi, sorry. My name is Andrew Campling. In this context, I’m an internet standards and internet governance enthusiast. To build on Bill’s comments, I probably wouldn’t start from here either. But here we are and, to be somewhat pessimistic, we’re probably too late. If I was going to look anywhere to start, it wouldn’t be the internet. I’d look closely at lessons from social media specifically, where we’ve got, in my opinion, a small number of highly dominant players who are uninterested in collaborative multi-stakeholder initiatives unless they’re commercially worthwhile to them. If we look to the internet model and try to collaborate and build a multi-stakeholder governance model, I don’t think there’s a commercial imperative for the players to do that. It’s far too easy to game the system and drag things out, and by the time something’s agreed, it will be irrelevant. So if I was to start anywhere, I’d look closely at duty of care as a key requirement, and also explore why we wouldn’t apply the precautionary principle widely, and use those as two foundational building blocks. I wouldn’t start with internet governance. So apologies for the pessimism, but I think we have to be pragmatic and realistic about where we are. Thank you.


Olivier Crepin-Leblond: I should say, that was quite a British intervention. Okay, thank you so much. I’ll pass it over to Luca. Or should we go to the conclusions, because there are only about six minutes left? Yeah, I think we can go.


Luca Belli: We have six minutes. Do we have any other comments or questions in the room? I don’t see any hands, and we have exhausted the comments from the online participants, so I think we can go for a round of very quick conclusions, like very prehistoric tweets of 240 characters. We don’t have time yet, though, because we’ve got to go now to Yik Chan. Yik Chan is the person, sorry. We already have Yik Chan, who, like a ChatGPT, will distill all the knowledge into a five-minute result.


Yik Chan Ching: Okay, and thank you very much for giving me the five minutes to make some comments. I’m from PNAI, the Policy Network on Artificial Intelligence, which is an intersessional process of the IGF, so it’s very interesting to have this joint session between the PNAI and the DCs. I found the discussion really fascinating, and I have two observations, based on PNAI’s past three years of research on AI governance. For example, we did two reports on big issues like liability, interoperability, and environmental protection. So there are two issues I would like to comment on. The first one is about the institutional setting, because Bill asked how we can collaborate at the global level and where the initiatives and interests lie. First of all, we know there is a UN process going on, with the scientific panel and the global dialogue, so we could probably give them some opportunity and a little bit of trust, and hold on to see what the outcomes at the UN level are. Secondly, in my experience, what really makes a difference between AI governance and internet or social media governance is that we learn from our past experience, especially social media’s experience. We have such vibrant discussion and early intervention, the precautionary principle, as our British colleague said, from different stakeholders: from civil society, from academia, and from industry. So in that sense we are much more precautionary than in the social media and internet eras, which will probably make a difference. The second issue is which areas we should look at. From my experience, and also PNAI’s experience, I agree with Vint. First of all, there is risk; risk is very important. Secondly, safety issues. And of course liability, because liability is the mechanism by which we hold AI developers and deployers accountable, so that’s very important. The third one, of course, is interoperability. When we talk about interoperability, it’s not only about principles, ethics, and norms, but also standards, and standards will play a significant role in regulating AI. I’m very glad to see a lot of progress in AI standard-making. For example, at the EU level there are standards to be announced under the AI Act, and there has also been huge progress on standard-making in China on safety and other issues. So I think AI standards will be one of the crucial areas for regulating AI in the future. I’ll stop here. Thank you very much.


Olivier Crepin-Leblond: Thank you very much, Yik Chan. And there are two minutes left, I guess, to ask our co-moderators for their reflections. I was going to say one tweet from each one of our participants, but I don’t know if we can do it in the two minutes. Should we try? One tweet? Yeah, why not? Quick tweet. Okay, let’s start with the table then, with the person furthest to my right, which is your left. Bill Drake?


Luca Belli: A message of hope in 20 seconds.


William Drake: A message of hope in 20 seconds. Wow.


Luca Belli: Or of disgrace, as you prefer.


William Drake: I was going to say abandon all hope. All right. Well, I’ll just echo again the point about being very clear about exactly what demand there is, for what kind of governance, over what kinds of processes. Too much of the discussion around these issues is just too generic and high-level to be very meaningful when we get down to the real nitty-gritty of what’s going on in different domains of AI development and application, so we need a dose of realism there. But I like the idea of the mapping effort that you’re trying to do, and I look forward to seeing you develop it further.


Olivier Crepin-Leblond: Thank you, Bill. Next, Shuyan.


Shuyan Wu: Okay, thank you. It’s my first time attending this kind of discussion, and it has been very valuable to share my opinions and discuss with all of you. I hope I have another chance to continue exchanging ideas with all of you. Thank you.


Hadia Elminiawi: Regional and international strategies and cooperation should not be seen as conflicting with national sovereignty. National and international strategies, cooperation, and collaboration should go in parallel and hand in hand. They should support and strengthen one another’s goals. They need to have aligned objectives and be implemented simultaneously.


Olivier Crepin-Leblond: Thank you, Hadia. Sandrine.


Sandrine ELMI HERSI: Yes, so we can no longer think of AI governance and internet governance as separate entities. As we noted today, there are strong interlinks between LLMs and internet content and services, so applying the internet’s core principles to AI is not a whim or an accessory. It is the only way to preserve the openness and richness of the internet we spent years building, and we can and must act now to establish a multi-stakeholder approach with that in mind.


Olivier Crepin-Leblond: Thank you. Renata.


Renata Mielli: Just three words: how to transform these principles into technical standards, as we talked about. I want to say we need oversight, agency, and regulation. And we need to remember that governance and regulation are two different things: governance needs to be multistakeholder, and we need national regulations for AI systems.


Olivier Crepin-Leblond: Thank you. Roxana.


Roxana Radu: I’ll just say that we need to walk the talk. Now that we’ve done this initial brainstorming session, I look forward to seeing what we can come up with together in terms of bridging this gap between what we’ve learned in internet governance and where we’re starting in AI discussions. This is not to say that everything applies, but we’ve learned a lot, and we shouldn’t reinvent the wheel.


Olivier Crepin-Leblond: Thank you. And finally, Vint.


Vint Cerf: I think my summary here is very simple. We just have to make sure that when we build these systems, we keep safety in mind for all of the users. That’s going to take a concerted effort from all of us.


Olivier Crepin-Leblond: Thank you very much, and if anybody in the room is interested in continuing this discussion, which I hope you are after this session, then please come over to the stage and share your details with us. You can get onto the DCs’ mailing lists, continue the discussion, and participate in future work of this kind. Thank you.



Pari Esfandiari

Speech speed: 127 words per minute
Speech length: 599 words
Speech time: 282 seconds

Internet’s foundational principles of openness, decentralization, and transparency should guide AI governance as generative AI becomes a main gateway to content

Explanation

Pari argues that as generative AI increasingly serves as the primary access point for online content, the core values that made the internet successful – being global, interoperable, open, decentralized, end-to-end, robust and reliable – should be applied to govern AI systems. She emphasizes that these were deliberate design choices that made the internet a global commons for innovation and human agency.


Evidence

References the internet’s core values: global, interoperable, open, decentralized, end-to-end, robust and reliable, and freedom from harm as deliberate design choices


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Infrastructure | Legal and regulatory


Fundamental network neutrality principles such as generativity and competition on a level playing field should apply to AI infrastructure, AI models, and content creation

Explanation

Pari presents this as one of the two overarching questions for the session, suggesting that the principles ensuring fair competition and innovation in internet infrastructure should be extended to AI systems. This includes ensuring that AI infrastructure and models maintain the same level playing field that network neutrality provides for internet services.


Evidence

Presented as one of two main questions guiding the session discussion


Major discussion point

Applying Network Neutrality Principles to AI


Topics

Infrastructure | Legal and regulatory



Luca Belli

Speech speed: 159 words per minute
Speech length: 1229 words
Speech time: 463 seconds

Internet and AI are “two different beasts” – Internet built on open, decentralized architecture while AI is highly centralized and proprietary

Explanation

Luca emphasizes the fundamental architectural differences between the internet and AI systems. While the internet was built on open, decentralized, transparent, and interoperable principles that enabled its success over 50 years, AI operates through highly centralized, proprietary, and often opaque systems controlled by few companies.


Evidence

References Vint Cerf’s expression from previous year and contrasts internet’s 50+ year success with current AI centralization trends


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Infrastructure | Legal and regulatory



Hadia Elminiawi

Speech speed: 122 words per minute
Speech length: 721 words
Speech time: 351 seconds

Core internet values like openness, interoperability, and neutrality are appearing in various AI governance strategies globally

Explanation

Hadia observes that the fundamental principles that shaped the internet are being incorporated into AI governance frameworks worldwide. She notes that various international strategies and regulatory approaches are adopting these core principles as foundational elements for AI governance.


Evidence

References EU’s AI Act (2024), US Executive Order for AI Leadership (January 2025), UK Framework for AI Regulation, G7’s Guiding Principles and Code of Conduct (2023), China’s AI rules, Egypt’s National AI Strategy (2025), and African Union Continental AI Strategy (July 2024)


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Legal and regulatory | Infrastructure


Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns

Explanation

Hadia questions whether requiring full transparency and open-source access to AI models is practical or desirable. She argues that given the massive capital investments in AI development, complete openness could discourage investment and destroy economic value, while also raising ethical and security concerns about unrestricted access to potentially harmful tools.


Evidence

Points to the substantial capital investment in AI models and security risks of unrestricted access to tools that could be used for weapons or harmful actions


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Sandrine ELMI HERSI
– Renata Mielli
– Vint Cerf

Agreed on

Need for AI transparency and explainability to address opacity challenges


Disagreed with

– Sandrine ELMI HERSI

Disagreed on

Feasibility of complete AI transparency and openness


Alternative solutions like requiring open-source safety guardrails rather than full model transparency should be considered

Explanation

As an alternative to complete model transparency, Hadia suggests that AI developers could be required to implement and make public their safety guardrails and protective measures. This approach would provide transparency about safety measures without exposing the entire model architecture.


Evidence

Suggests requiring AI developers to implement robust safety guardrails and publish information about these safety measures


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory | Cybersecurity


Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them

Explanation

Hadia emphasizes that international cooperation and national sovereignty in AI governance should be complementary rather than competing approaches. She argues that national and international strategies should have aligned objectives and be implemented simultaneously to support each other’s goals.


Major discussion point

Global South Perspectives and Digital Divide


Topics

Legal and regulatory | Development



Roxana Radu

Speech speed: 144 words per minute
Speech length: 504 words
Speech time: 208 seconds

Internet governance experience over 30 years provides mature framework for applying values to technical, policy and legal standards that AI governance lacks

Explanation

Roxana highlights the significant difference in maturity between internet governance and AI governance discussions. While internet governance has spent decades not just identifying core values but actually implementing and embedding them into practical standards and practices, AI governance is still primarily focused on identifying ethical principles without the same level of practical application.


Evidence

Contrasts 30+ years of internet governance development with current early-stage AI ethics discussions


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Legal and regulatory | Infrastructure



William Drake

Speech speed: 175 words per minute
Speech length: 1224 words
Speech time: 418 seconds

Need to define precisely what aspects of AI require governance rather than applying generic high-level principles

Explanation

William argues that the AI field is too vast and diverse to apply broad governance principles uniformly across all applications. He emphasizes the need for careful investigation and mapping to determine which internet properties apply generally versus in specific contexts, rather than making assumptions about universal applicability.


Evidence

Points to the unlimited range of AI applications from medicine to environment and suggests need for close investigation and mapping


Major discussion point

Bridging Internet Core Values and AI Governance


Topics

Legal and regulatory


Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency

Explanation

William expresses skepticism about major AI companies voluntarily adopting governance frameworks that don’t align with their immediate profitability goals. He argues that these companies have demonstrated they will prioritize their business interests over external governance constructs, making multilateral cooperation challenging.


Evidence

References companies’ demonstrated behavior of prioritizing business interests, including ‘sponsoring military parades for dear leaders in Washington’


Major discussion point

Market Concentration and Gatekeeping Issues


Topics

Economic | Legal and regulatory


Must identify where there’s actual functional demand for international governance rather than assuming need based on technology existence

Explanation

William warns against assuming that new technology automatically creates demand for governance arrangements. He argues that successful international governance requires clear functional needs for coordination or harmonization, using historical examples from telecommunications where technical requirements drove cooperation.


Evidence

Uses historical examples of radio frequency spectrum and telecom network interconnection where technical necessity drove international cooperation


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Disagreed with

– Andrew Campling

Disagreed on

Starting point for AI governance frameworks


Multilateral regulatory interventions face political obstacles, and binding international agreements may be unrealistic

Explanation

William points to current political realities that make international AI governance challenging, particularly noting that net neutrality is now prohibited in the US and that the G77 and China are demanding binding commitments from UN processes. He questions the feasibility of negotiating binding international AI agreements in the current political climate.


Evidence

Notes that net neutrality is ‘verboten in the United States now’ and references G77 and China’s demands for binding international commitments from UN AI processes


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Disagreed with

– Yik Chan Ching

Disagreed on

Optimism vs pessimism about multilateral AI governance



Sandrine ELMI HERSI

Speech speed: 125 words per minute
Speech length: 791 words
Speech time: 378 seconds

Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability

Explanation

Sandrine argues that despite some progress through sectoral initiatives and codes of conduct, many AI models lack sufficient transparency. She emphasizes that greater openness, particularly to researchers, is essential for improving both the auditability and explainability of AI systems, as well as their efficiency.


Evidence

References ARCEP’s ongoing technical hearings and file testing with data scientists, and notes some progress through sectoral initiatives and codes of conduct


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory


Agreed with

– Renata Mielli
– Vint Cerf
– Hadia Elminiawi

Agreed on

Need for AI transparency and explainability to address opacity challenges


Disagreed with

– Hadia Elminiawi

Disagreed on

Feasibility of complete AI transparency and openness


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services

Explanation

Sandrine argues that the non-discrimination principle originally applied to prevent ISPs from favoring their own services should now be extended to AI systems. She contends that today’s digital gatekeepers include not just ISPs but also AI systems that can narrow user perspectives and freedom of choice.


Evidence

References ARCEP’s work on assessing extension of non-discrimination principles and draws parallel to original ISP regulation


Major discussion point

Applying Network Neutrality Principles to AI


Topics

Infrastructure | Legal and regulatory


Disagreed with

– Renata Mielli

Disagreed on

Direct applicability of net neutrality principles to AI


Need to preserve plurality of economic players’ access to key inputs for AI development including data, computing resources, and energy

Explanation

Sandrine emphasizes the importance of maintaining competitive AI markets by ensuring diverse economic actors can access essential resources for AI development. This includes not just data but also the computational power and energy resources necessary for training and running AI models.


Evidence

References ARCEP’s investigation into preserving openness of AI markets


Major discussion point

Market Concentration and Gatekeeping Issues


Topics

Economic | Infrastructure


Need to ensure diversity of content when AI chatbots provide single answers instead of hundreds of web pages

Explanation

Sandrine highlights a fundamental shift in how users access information – from browsing multiple web pages to receiving single AI-generated responses. She argues this change requires ensuring that AI systems don’t simply amplify dominant sources but remain open to smaller and independent content creators.


Evidence

Contrasts traditional web search results (hundreds of pages) with AI chatbot responses (single answer)


Major discussion point

Applying Network Neutrality Principles to AI


Topics

Sociocultural | Legal and regulatory



Renata Mielli

Speech speed: 111 words per minute
Speech length: 674 words
Speech time: 361 seconds

AI systems need transparency and explainability especially for social impact assessment and compliance processes, unlike the naturally open internet protocols

Explanation

Renata argues that AI governance requires specific principles like transparency and explainability that weren’t as critical for internet governance because the internet was built on naturally open, decentralized protocols developed collaboratively. AI systems, being opaque and centralized, require these additional transparency measures for social impact assessment and compliance.


Evidence

Contrasts internet’s open, decentralized, collaborative protocol development with AI’s opacity and centralization


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory


Agreed with

– Sandrine ELMI HERSI
– Vint Cerf
– Hadia Elminiawi

Agreed on

Need for AI transparency and explainability to address opacity challenges


Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure

Explanation

Renata identifies a fundamental difference between internet infrastructure and AI technology in terms of neutrality. While net neutrality was designed for telecommunications infrastructure that could be neutral, AI technology itself is inherently non-neutral, making direct application of net neutrality principles problematic.


Evidence

Distinguishes between neutrality in telecommunications infrastructure versus the inherent non-neutrality of AI technology


Major discussion point

Applying Network Neutrality Principles to AI


Topics

Infrastructure | Legal and regulatory


Disagreed with

– Sandrine ELMI HERSI

Disagreed on

Direct applicability of net neutrality principles to AI


Need to transform principles into technical standards while distinguishing between governance and regulation

Explanation

Renata emphasizes the practical challenge of moving from high-level principles to implementable technical standards. She stresses the importance of understanding that governance and regulation are different concepts, with governance being multistakeholder while regulation requires national-level legal frameworks.


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory | Infrastructure



Vint Cerf

Speech speed: 133 words per minute
Speech length: 1262 words
Speech time: 567 seconds

Provenance of information used by AI agents and references must be available for critical evaluation of outputs

Explanation

Vint emphasizes the critical importance of being able to trace and verify the sources of information used by AI systems. He argues that users need access to the provenance of training data and references to conduct their own critical thinking and evaluation of AI outputs, particularly given concerns about hallucination and counterfactual information.


Evidence

References the problem of AI hallucination and generation of counterfactual output, emphasizing need for critical evaluation capabilities


Major discussion point

AI Transparency and Explainability Challenges


Topics

Legal and regulatory


Agreed with

– Sandrine ELMI HERSI
– Renata Mielli
– Hadia Elminiawi

Agreed on

Need for AI transparency and explainability to address opacity challenges


Agent-to-agent protocols and model context protocols are being developed to ensure interoperability among AI systems

Explanation

Vint describes emerging technical standards (A2A for agent-to-agent interaction and MCP for model context protocol) that aim to create interoperability between AI agents. These protocols are designed to provide clarity and confidence in semantic matching between agents, preventing the kind of information degradation that occurs in the telephone game.


Evidence

Explains A2A (agent-to-agent) and MCP (model context protocol) standards and uses the analogy of the telephone parlor game to illustrate communication degradation risks


Major discussion point

Technical Standards and Interoperability


Topics

Infrastructure | Digital standards


Agreed with

– Yik Chan Ching
– Audience

Agreed on

Importance of technical standards and interoperability for AI systems


Focus should be on risk to users and liability for providers, with high-risk applications requiring higher safety levels

Explanation

Vint advocates for a risk-based approach to AI regulation, where the level of safety requirements corresponds to the potential risk to users. He suggests that high-risk applications like medical diagnosis or financial advice should have stringent safety requirements, with providers demonstrating due diligence to reduce user risk.


Evidence

Provides examples of high-risk applications including medical diagnosis, medical treatment recommendations, and financial advice


Major discussion point

Risk-Based AI Governance Approach


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Yik Chan Ching
– Alejandro Pisanty

Agreed on

Risk-based approach to AI governance with focus on user safety and provider liability



Shuyan Wu

Speech speed: 120 words per minute
Speech length: 483 words
Speech time: 239 seconds

China Mobile’s experience shows importance of ensuring equal access, protecting user rights, and bridging digital divides in AI era

Explanation

Shuyan describes China Mobile’s comprehensive approach to digital inclusion, covering infrastructure development (5G networks reaching all villages), user protection (transparent services, fraud prevention), and targeted solutions for vulnerable groups (elderly, minors, rural communities). This experience is being adapted for AI governance to ensure universal access and inclusive benefits.


Evidence

Provides specific examples: China Mobile’s 5G network covering all villages, customized services for elderly and minors, 5G smart education for rural areas, AI-powered fraud detection, and smart village doctor systems


Major discussion point

Global South Perspectives and Digital Divide


Topics

Development | Infrastructure



Audience

Speech speed: 147 words per minute
Speech length: 544 words
Speech time: 220 seconds

Need to focus on intersection of AI and internet where AI feeds on web content and produces web content

Explanation

The audience member from W3C suggests that rather than trying to govern all of AI, focus should be on the specific intersections between AI and the internet. This includes how AI systems consume web content for training, produce web content as output, and operate as agents within the web ecosystem.


Evidence

Mentions AI being fed from web content, web content being produced through AI, and AI being used as agents on the web


Major discussion point

Technical Standards and Interoperability


Topics

Infrastructure | Digital standards


Agreed with

– Vint Cerf
– Yik Chan Ching

Agreed on

Importance of technical standards and interoperability for AI systems


Duty of care and precautionary principle should be foundational building blocks for AI governance

Explanation

Andrew Campling argues that instead of starting with internet governance models, AI governance should be built on duty of care requirements and precautionary principles. He suggests these would be more practical and realistic foundations given the commercial realities and dominant players in the AI space.


Evidence

Draws comparison to social media governance challenges with dominant players disinterested in collaborative initiatives


Major discussion point

Risk-Based AI Governance Approach


Topics

Legal and regulatory | Cybersecurity


Disagreed with

– William Drake

Disagreed on

Starting point for AI governance frameworks



Alejandro Pisanty

Speech speed: 167 words per minute
Speech length: 718 words
Speech time: 257 seconds

Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks

Explanation

Alejandro argues for leveraging existing regulatory frameworks rather than building AI governance from scratch. He suggests that many rules already exist for automated systems, medical devices, and government procurement that can be extended or modified to address AI-specific concerns like discrimination and harm, rather than creating completely new regulatory structures.


Evidence

Provides examples of existing government purchasing rules requiring non-discriminatory systems, medical device regulations for automated systems, and established approaches to handling uncertainty and probability in regulation


Major discussion point

Risk-Based AI Governance Approach


Topics

Legal and regulatory


Agreed with

– Vint Cerf
– Yik Chan Ching

Agreed on

Risk-based approach to AI governance with focus on user safety and provider liability



Yik Chan Ching

Speech speed: 144 words per minute
Speech length: 500 words
Speech time: 207 seconds

Risk assessment, safety issues, and liability mechanisms are crucial for holding AI developers accountable

Explanation

Yik Chan emphasizes three key areas for AI governance based on PNAI’s research: risk assessment as a fundamental approach, safety as a critical concern, and liability as the mechanism to ensure AI developers and deployers remain accountable for their systems’ impacts.


Evidence

References PNAI’s three years of research and reports on liability, interoperability, and environmental protection


Major discussion point

Risk-Based AI Governance Approach


Topics

Legal and regulatory


Agreed with

– Vint Cerf
– Alejandro Pisanty

Agreed on

Risk-based approach to AI governance with focus on user safety and provider liability


AI standards development is progressing significantly in EU, China, and other regions, particularly around safety issues

Explanation

Yik Chan highlights the substantial progress being made in AI standardization efforts globally, with particular emphasis on safety standards. He notes that standards will play a crucial role in AI regulation and points to developments in multiple jurisdictions including the EU’s AI Act standards and China’s safety-focused standards.


Evidence

References EU AI Act standards announcements and China’s progress on safety and other AI-related standards


Major discussion point

Technical Standards and Interoperability


Topics

Infrastructure | Digital standards


Agreed with

– Vint Cerf
– Audience

Agreed on

Importance of technical standards and interoperability for AI systems


Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures

Explanation

Yik Chan argues that the AI governance community is more prepared than previous technology governance efforts because of lessons learned from social media. He suggests that having vibrant discussions and early intervention from multiple stakeholders (civil society, academia, industry) represents a more precautionary approach than was taken with social media.


Evidence

Contrasts current multi-stakeholder AI discussions with past social media governance approaches


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Disagreed with

– William Drake

Disagreed on

Optimism vs pessimism about multilateral AI governance



Olivier Crepin-Leblond

Speech speed: 144 words per minute
Speech length: 774 words
Speech time: 321 seconds

Interactive multi-stakeholder sessions are essential for effective governance discussions on bridging internet and AI governance

Explanation

Olivier emphasizes the importance of creating interactive forums where diverse speakers can present different angles on complex governance topics, followed by broader community discussion. He advocates for inclusive participation where attendees can join the discussion table and contribute to the dialogue.


Evidence

Organizes joint session between Dynamic Coalition on Core Internet Values and Dynamic Coalition on Network Neutrality with multiple speakers and open floor discussion


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Time constraints require focused and efficient discussion formats to address complex governance challenges

Explanation

Olivier recognizes that meaningful governance discussions must balance thoroughness with practical time limitations. He structures the session to maximize productive dialogue while acknowledging the need to move efficiently through different perspectives and community input.


Evidence

Notes having only 75 minutes for the session and manages time allocation between speakers, commenters, and open discussion


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Continued engagement beyond formal sessions is crucial for advancing governance frameworks

Explanation

Olivier emphasizes that meaningful governance work extends beyond individual sessions and requires ongoing collaboration through established channels. He encourages participants to maintain engagement through mailing lists and future collaborative work to build on the discussions initiated during formal meetings.


Evidence

Invites participants to join DC mailing lists and continue discussions, emphasizing the importance of ongoing participation in future work


Major discussion point

Governance Implementation Challenges


Topics

Legal and regulatory


Agreements

Agreement points

Risk-based approach to AI governance with focus on user safety and provider liability

Speakers

– Vint Cerf
– Yik Chan Ching
– Alejandro Pisanty

Arguments

Focus should be on risk to users and liability for providers, with high-risk applications requiring higher safety levels


Risk assessment, safety issues, and liability mechanisms are crucial for holding AI developers accountable


Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks


Summary

Multiple speakers converged on the importance of implementing risk-based governance frameworks that prioritize user safety and establish clear liability mechanisms for AI providers, particularly for high-risk applications like medical diagnosis and financial advice.


Topics

Legal and regulatory | Cybersecurity


Need for AI transparency and explainability to address opacity challenges

Speakers

– Sandrine ELMI HERSI
– Renata Mielli
– Vint Cerf
– Hadia Elminiawi

Arguments

Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability


AI systems need transparency and explainability especially for social impact assessment and compliance processes, unlike the naturally open internet protocols


Provenance of information used by AI agents and references must be available for critical evaluation of outputs


Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns


Summary

Speakers agreed that AI systems require significantly more transparency than current implementations provide, though they acknowledged practical challenges in achieving complete openness due to investment and security concerns.


Topics

Legal and regulatory


Importance of technical standards and interoperability for AI systems

Speakers

– Vint Cerf
– Yik Chan Ching
– Audience

Arguments

Agent-to-agent protocols and model context protocols are being developed to ensure interoperability among AI systems


AI standards development is progressing significantly in EU, China, and other regions, particularly around safety issues


Need to focus on intersection of AI and internet where AI feeds on web content and produces web content


Summary

There was strong agreement on the critical role of developing technical standards for AI interoperability, with recognition of ongoing global efforts in standardization and the need to focus on AI-internet intersections.


Topics

Infrastructure | Digital standards


Similar viewpoints

These speakers shared the view that while internet and AI are fundamentally different architectures, the core principles that made the internet successful should be adapted and applied to AI governance, particularly through extending network neutrality concepts.

Speakers

– Luca Belli
– Pari Esfandiari
– Sandrine ELMI HERSI

Arguments

Internet and AI are “two different beasts” – Internet built on open, decentralized architecture while AI is highly centralized and proprietary


Internet’s foundational principles of openness, decentralization, and transparency should guide AI governance as generative AI becomes a main gateway to content


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services


Topics

Infrastructure | Legal and regulatory


Both speakers expressed skepticism about the feasibility of applying internet governance models to AI, emphasizing the need for more pragmatic approaches that account for commercial realities and dominant market players.

Speakers

– William Drake
– Andrew Campling

Arguments

Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency


Duty of care and precautionary principle should be foundational building blocks for AI governance


Topics

Legal and regulatory | Economic


Both speakers emphasized the importance of inclusive AI development that bridges digital divides while respecting national approaches, with focus on ensuring benefits reach underserved populations.

Speakers

– Hadia Elminiawi
– Shuyan Wu

Arguments

Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them


China Mobile’s experience shows importance of ensuring equal access, protecting user rights, and bridging digital divides in AI era


Topics

Development | Legal and regulatory


Unexpected consensus

Limitations of direct application of internet governance principles to AI

Speakers

– Renata Mielli
– William Drake
– Hadia Elminiawi

Arguments

Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure


Need to define precisely what aspects of AI require governance rather than applying generic high-level principles


Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns


Explanation

Despite the session’s goal of bridging internet and AI governance, there was unexpected consensus among speakers from different backgrounds that direct application of internet principles to AI faces significant practical and conceptual limitations.


Topics

Legal and regulatory | Infrastructure


Importance of leveraging existing regulatory frameworks rather than creating entirely new ones

Speakers

– Alejandro Pisanty
– Roxana Radu
– Yik Chan Ching

Arguments

Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks


Internet governance experience over 30 years provides mature framework for applying values to technical, policy and legal standards that AI governance lacks


Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures


Explanation

There was unexpected agreement across speakers that AI governance should build upon existing regulatory experience and frameworks rather than starting from scratch, representing a pragmatic approach to governance development.


Topics

Legal and regulatory


Overall assessment

Summary

The discussion revealed significant agreement on the need for risk-based AI governance, transparency requirements, and technical standards development, while acknowledging fundamental challenges in directly applying internet governance principles to AI systems.


Consensus level

Moderate to high consensus on core governance needs (safety, transparency, standards) but significant disagreement on implementation approaches and the applicability of internet governance models. This suggests that while there is shared understanding of AI governance challenges, the path forward requires careful consideration of AI’s unique characteristics rather than simple adaptation of existing frameworks.


Differences

Different viewpoints

Feasibility of complete AI transparency and openness

Speakers

– Hadia Elminiawi
– Sandrine ELMI HERSI

Arguments

Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns


Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability


Summary

Hadia questions whether requiring full transparency and open-source access to AI models is practical given massive capital investments and security risks, while Sandrine advocates for greater openness particularly to researchers for auditability purposes


Topics

Legal and regulatory | Cybersecurity


Direct applicability of net neutrality principles to AI

Speakers

– Renata Mielli
– Sandrine ELMI HERSI

Arguments

Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services


Summary

Renata argues that net neutrality cannot be directly applied to AI because AI technology is inherently non-neutral, while Sandrine advocates for extending non-discrimination principles from network neutrality to AI systems


Topics

Infrastructure | Legal and regulatory


Starting point for AI governance frameworks

Speakers

– William Drake
– Andrew Campling

Arguments

Must identify where there’s actual functional demand for international governance rather than assuming need based on technology existence


Duty of care and precautionary principle should be foundational building blocks for AI governance


Summary

William emphasizes the need to identify functional demand for governance before creating frameworks, while Andrew advocates for starting with duty of care and precautionary principles as foundational elements


Topics

Legal and regulatory


Optimism vs pessimism about multilateral AI governance

Speakers

– William Drake
– Yik Chan Ching

Arguments

Multilateral regulatory interventions face political obstacles, and binding international agreements may be unrealistic


Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures


Summary

William expresses pessimism about the feasibility of multilateral AI governance given current political realities, while Yik Chan is more optimistic about early intervention approaches based on lessons learned from social media


Topics

Legal and regulatory


Unexpected differences

Neutrality of AI technology itself

Speakers

– Renata Mielli
– Sandrine ELMI HERSI

Arguments

Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services


Explanation

This disagreement is unexpected because both speakers come from regulatory/governance backgrounds and might be expected to align on extending internet governance principles to AI, but they fundamentally disagree on whether AI’s inherent non-neutrality prevents direct application of net neutrality principles


Topics

Infrastructure | Legal and regulatory


Feasibility of international AI governance

Speakers

– William Drake
– Hadia Elminiawi

Arguments

Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency


Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them


Explanation

This disagreement is unexpected given both speakers’ extensive experience in international governance – William’s pessimism about private sector cooperation contrasts sharply with Hadia’s optimism about aligning international and national strategies


Topics

Legal and regulatory | Economic


Overall assessment

Summary

The main areas of disagreement center on the practical implementation of AI governance principles, the extent of transparency required, the applicability of existing internet governance frameworks to AI, and the feasibility of international cooperation


Disagreement level

Moderate to high disagreement level with significant implications – while speakers generally agree on the importance of applying internet values to AI governance, they fundamentally disagree on how to achieve this, suggesting that developing consensus on AI governance frameworks will require substantial additional work to bridge these conceptual and practical differences


Partial agreements

Similar viewpoints

These speakers shared the view that while internet and AI are fundamentally different architectures, the core principles that made the internet successful should be adapted and applied to AI governance, particularly through extending network neutrality concepts.

Speakers

– Luca Belli
– Pari Esfandiari
– Sandrine ELMI HERSI

Arguments

Internet and AI are “two different beasts” – Internet built on open, decentralized architecture while AI is highly centralized and proprietary


Internet’s foundational principles of openness, decentralization, and transparency should guide AI governance as generative AI becomes a main gateway to content


Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services


Topics

Infrastructure | Legal and regulatory


Both speakers expressed skepticism about the feasibility of applying internet governance models to AI, emphasizing the need for more pragmatic approaches that account for commercial realities and dominant market players.

Speakers

– William Drake
– Andrew Campling

Arguments

Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency


Duty of care and precautionary principle should be foundational building blocks for AI governance


Topics

Legal and regulatory | Economic


Both speakers emphasized the importance of inclusive AI development that bridges digital divides while respecting national approaches, with focus on ensuring benefits reach underserved populations.

Speakers

– Hadia Elminiawi
– Shuyan Wu

Arguments

Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them


China Mobile’s experience shows importance of ensuring equal access, protecting user rights, and bridging digital divides in AI era


Topics

Development | Legal and regulatory


Takeaways

Key takeaways

Internet’s foundational principles of openness, decentralization, and transparency can serve as signposts for AI governance, but require active adaptation since Internet and AI are ‘two different beasts’


AI governance faces fundamental tension between Internet’s open, distributed architecture and AI’s centralized, proprietary model controlled by few actors


Risk-based approach to AI governance should focus on user safety and provider liability, with high-risk applications requiring higher safety standards


Transparency and explainability are essential for AI systems but complete openness may be unrealistic due to investment concerns and security risks


Network neutrality principles of non-discrimination should extend to AI infrastructure and content curation to preserve diversity and prevent gatekeeping


Technical standards and interoperability protocols (such as agent-to-agent and model context protocols) will be crucial for implementing AI governance (see the sketch after this list)


Global South perspectives and capabilities must be included in AI governance discussions to address existing asymmetries


AI governance should build on 30 years of Internet governance experience rather than starting from scratch, while recognizing what doesn’t directly apply


Multi-stakeholder governance approach is essential, but private actors’ commercial interests may limit participation in voluntary international standards
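

For illustration, here is a minimal sketch of the kind of message exchange that such interoperability protocols standardize. The Model Context Protocol is built on JSON-RPC 2.0; the tool name (fetch_document), its arguments, and the URL below are hypothetical assumptions, not drawn from the session.

```python
import json

def jsonrpc_request(method: str, params: dict, req_id: int) -> str:
    # JSON-RPC 2.0 envelope: the wire format that the Model Context
    # Protocol builds on for client-server exchanges.
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

# Hypothetical exchange: an agent asks a tool server to fetch a document.
request = jsonrpc_request(
    "tools/call",
    {"name": "fetch_document", "arguments": {"url": "https://example.org"}},
    req_id=1,
)
print(request)

# A conforming server would reply with a response carrying either a
# "result" or an "error" object keyed to the same id.
response = {"jsonrpc": "2.0", "id": 1, "result": {"content": "..."}}
```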


Resolutions and action items

Continue discussion through Dynamic Coalition mailing lists for interested participants


Develop detailed mapping matrix of which Internet properties apply to specific AI contexts and applications


ARCEP (French regulator) to complete ongoing technical report on applying Internet core values to AI governance


Focus on intersection points between AI and Internet rather than trying to govern all AI applications generically


Explore alternative transparency solutions like requiring open-source safety guardrails rather than full model openness


Unresolved issues

How to define and regulate new AI gatekeepers when traditional Internet governance models may not apply


Whether complete AI model transparency is realistic or desirable given investment requirements and security concerns


How to ensure meaningful participation of major AI companies in voluntary international governance frameworks


What specific aspects of AI actually require international coordination versus national regulation


How to balance innovation incentives with transparency and accountability requirements


Whether binding international AI agreements are feasible given current political climate


How to transform high-level principles into actionable technical standards and regulatory frameworks


How to address liability and responsibility in multi-agent AI systems


What constitutes functional demand for AI governance versus assumed need based on technology existence


Suggested compromises

Require open-source safety guardrails and published safety measures rather than full AI model transparency


Apply a layered-safeguards approach, with AI algorithms monitoring other AI algorithms for responsible use (a minimal sketch follows this list)


Focus on risk-based regulation where high-risk applications have stricter requirements rather than blanket AI rules


Extend existing regulatory frameworks (medical devices, purchasing rules) to AI applications rather than creating entirely new governance structures


Pursue sector-specific AI governance approaches rather than generic cross-cutting regulations


Combine national AI regulations with aligned international cooperation strategies that support rather than conflict with sovereignty


Start with duty of care and precautionary principles as foundational building blocks rather than comprehensive Internet governance models


Thought provoking comments

The Internet and AI are two different beasts. So we are speaking about two things that are two digital phenomenon, but they are quite different. And the Internet, as Pari was reminding us very eloquently, has been built on an open, decentralized, transparent, interoperable architecture that made the success of the Internet over the past 70 years… but the question here is how we reconcile this with a highly centralized AI architecture.

Speaker

Luca Belli


Reason

This comment crystallized the fundamental tension at the heart of the discussion – the architectural incompatibility between the internet’s foundational principles and AI’s current development trajectory. It moved beyond surface-level comparisons to identify the core structural challenge.


Impact

This framing established the central problematic that all subsequent speakers had to grapple with. It shifted the discussion from whether internet values could apply to AI, to how they could be reconciled with AI’s inherently different architecture. This tension became a recurring theme throughout the session.


Every time someone interacts with one of those [large language models], they are specializing it to their interests and their needs. So in a sense, we have a very distributed ability to adapt a particular large language model to a particular problem… And that’s important, the fact that we are able to personalize.

Speaker

Vint Cerf


Reason

This insight reframed AI from being purely centralized to having distributed elements through user interaction. It challenged the binary view of centralized vs. decentralized systems and introduced nuance about how users can maintain agency even within centralized AI systems.


Impact

This comment provided a counterpoint to concerns about AI centralization and influenced later discussions about user agency and the potential for maintaining some internet-like distributed characteristics in AI systems. It offered a more optimistic perspective on preserving user empowerment.


Is it realistic or even desirable to expect that all AI models be made fully open source? Given the amount of capital investment in these models, requiring complete openness could discourage investment in AI models, destroying a lot of economic value and hindering innovation… Is it truly responsible or logical to allow unrestricted access to tools that could be used to build weapons or plan harmful disruptive actions?

Speaker

Hadia Elminiawi


Reason

This comment introduced crucial practical and ethical constraints that challenge idealistic applications of internet openness principles to AI. It forced the discussion to confront real-world trade-offs between values like openness and safety/security concerns.


Impact

This intervention shifted the conversation from theoretical principle-mapping to practical implementation challenges. It introduced the concept of ‘layered safeguards’ and sparked discussion about alternative approaches to transparency that don’t require full openness, influencing the overall tone toward more pragmatic solutions.


What we’ve done in internet governance over the last 30 years is much more than identifying core values. We apply them, we’ve embedded them into core practices, and we are continuing to refine these practices day by day… With AI, there seems to be a preference for unilateral standards, the giants developing their own standards, sharing them through APIs, versus globally negotiated standards.

Speaker

Roxana Radu


Reason

This comment highlighted a critical difference in governance maturity and approach between internet and AI governance. It identified the shift from collaborative standard-setting to unilateral corporate control as a key challenge, moving beyond principles to examine governance processes themselves.


Impact

This observation redirected attention from what principles to apply to how governance processes differ between domains. It influenced subsequent discussions about stakeholder participation and the challenges of bringing AI companies to collaborative governance tables.


We simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand [for international governance]. You know, often people point to things and say, oh, there’s some new phenomena. We must have governance arrangements. But very often the demand for governance arrangements is not equally distributed across actors.

Speaker

William Drake


Reason

This comment challenged a fundamental assumption underlying the entire session – that AI governance is necessarily needed or wanted by key stakeholders. It introduced a dose of political realism about power dynamics and incentives that was largely absent from earlier idealistic discussions.


Impact

This intervention served as a reality check that sobered the discussion. It forced participants to consider not just what governance should look like, but whether it’s actually achievable given current power structures. This influenced the final discussions toward more pragmatic approaches and acknowledgment of constraints.


If you want to regulate large language models provided over the internet for chatbots… Why would OpenAI, Google, Meta, et cetera… why would they come together and agree to limit themselves in some way? Also to sit at the table with people who are their users or their clients, potentially their competitors if something arises from their innovation.

Speaker

Alejandro Pisanty


Reason

This comment cut to the heart of the governance challenge by questioning the fundamental incentive structures. It moved beyond technical and ethical considerations to examine the political economy of AI governance, highlighting why voluntary cooperation might be unrealistic.


Impact

This comment reinforced the realist turn in the discussion initiated by Drake and others. It contributed to a more sober assessment of governance possibilities and influenced the final recommendations toward focusing on areas where there might be actual incentives for cooperation, such as liability and risk management.


Overall assessment

These key comments fundamentally shaped the discussion by introducing increasing levels of realism and complexity. The session began with an optimistic framing about mapping internet values to AI governance, but these interventions progressively challenged assumptions, introduced practical constraints, and highlighted structural differences between the domains. The comments created a dialectical progression from idealism to realism, ultimately leading to more nuanced and pragmatic conclusions. Rather than simply advocating for applying internet principles to AI, the discussion evolved to acknowledge the fundamental tensions, power dynamics, and implementation challenges involved. This resulted in a more sophisticated understanding of the governance challenge and more realistic recommendations focused on specific areas like risk management, liability, and targeted interventions rather than wholesale principle transfer.


Follow-up questions

How do we define who the new gatekeepers in AI are, and how do we implement laws that may not even exist yet to regulate them?

Speaker

Luca Belli


Explanation

This addresses the fundamental challenge of identifying control points in AI systems and developing appropriate regulatory frameworks, which is crucial for applying internet governance principles to AI


What alternative solutions can we consider for AI transparency beyond making all models fully open source?

Speaker

Hadia Elminiawi


Explanation

This explores practical approaches to transparency that balance openness with security concerns and investment protection, which is essential for developing workable AI governance frameworks


How can we develop a detailed matrix mapping which internet properties apply generally or in specific AI contexts?

Speaker

William Drake


Explanation

This would provide a systematic framework for understanding how internet governance principles can be applied across different AI applications and contexts


What aspects of AI processes absolutely require international coordination or harmonization?

Speaker

William Drake


Explanation

This is critical for determining where international governance efforts should focus and where there is genuine functional demand for coordination


How do we bring different stakeholders, especially dominant AI companies, to the table for governance discussions?

Speaker

Alejandro Pisanty


Explanation

This addresses the practical challenge of creating incentives for major AI players to participate in governance frameworks that may limit their operations


How can we establish indelible ways to identify sources of content used to train AI models?

Speaker

Vint Cerf


Explanation

This is important for establishing provenance and accountability in AI systems, which is fundamental to trust and liability frameworks
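

For illustration, here is a minimal sketch of one possible approach, assuming training documents are available as raw bytes: recording a cryptographic digest per source yields a tamper-evident manifest. The session did not endorse a specific mechanism; the URL and field names below are hypothetical.

```python
import hashlib
import json

def fingerprint(source_url: str, content: bytes) -> dict:
    # A SHA-256 digest plus source metadata gives a verifiable record
    # of exactly which version of a document entered the training set.
    return {
        "source": source_url,
        "sha256": hashlib.sha256(content).hexdigest(),
        "length": len(content),
    }

manifest = [fingerprint("https://example.org/corpus/doc1.txt",
                        b"example document text")]
print(json.dumps(manifest, indent=2))
# Any later change to the document changes the digest, so the manifest
# can be checked against archived copies of the training sources.
```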


How do existing web standards and expectations for user agents apply to AI-based agents?

Speaker

Dominique Hazaël-Massieux


Explanation

This explores how established internet protocols and standards can be extended to govern AI agents operating on the web
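

For illustration, the sketch below shows how an AI-based agent could honor two existing web conventions: identifying itself through a User-Agent string and respecting robots.txt before fetching a page. It uses Python's standard urllib.robotparser; the agent name and URLs are hypothetical.

```python
from urllib import robotparser

AGENT = "ExampleAIAgent/0.1"  # hypothetical agent identifier

rp = robotparser.RobotFileParser()
rp.set_url("https://example.org/robots.txt")
rp.read()  # fetches and parses the site's robots.txt over the network

url = "https://example.org/private/report.html"
if rp.can_fetch(AGENT, url):
    print(f"{AGENT} may fetch {url}")
else:
    print(f"{AGENT} is disallowed from {url}")
```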


How can we transform AI governance principles into technical standards?

Speaker

Renata Mielli


Explanation

This addresses the practical implementation challenge of moving from high-level principles to actionable technical specifications


What does ensuring transparent algorithms mean in practical terms for AI systems?

Speaker

Hadia Elminiawi


Explanation

This seeks to define concrete requirements for AI transparency beyond abstract principles


How can we ensure AI systems remain open to smaller and independent content creators rather than just amplifying dominant sources?

Speaker

Sandrine ELMI HERSI


Explanation

This addresses concerns about AI systems potentially concentrating power and reducing diversity in content and innovation


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies

Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies

Session at a glance

Summary

This panel discussion examined the case for local artificial intelligence innovation that serves humanity’s benefit, focusing on three key dimensions: inclusivity, indigeneity, and intentionality. The session was moderated by Valeria Betancourt and featured experts from various organizations discussing how to develop contextually grounded AI that contributes to people and planetary well-being.


Anita Gurumurthy from IT4Change framed the conversation by highlighting the tension between unequal AI capabilities distribution and increasing demands from climate and energy impacts. She emphasized that current AI investment ($200 billion between 2022-2025) is three times global climate adaptation spending, raising concerns about energy consumption and cultural homogenization through Western-centric AI models. The discussion revealed that local AI development faces significant challenges, including limited access to computing infrastructure, data scarcity in local languages, and skills gaps.


Wai Sit Si Thou from UN Trade and Development presented a framework focusing on infrastructure, data, and skills as key drivers for inclusive AI adoption. The presentation emphasized working with locally available infrastructure, community-led data, and simple interfaces while advocating for worker-centric approaches that complement rather than replace human labor. Ambassador Abhishek Singh from India shared practical examples of democratizing AI access through government-subsidized computing infrastructure, crowd-sourced linguistic datasets, and capacity-building initiatives.


Sarah Nicole from Project Liberty Institute argued that AI amplifies existing centralized digital economy structures rather than disrupting them, advocating for radical infrastructure changes that give users data agency through cooperative models and open protocols. The discussion explored various approaches to data governance, including data cooperatives that enable collective bargaining power rather than individual data monetization.


The panelists concluded that developing local AI requires international cooperation, shared computing infrastructure, open-source models, and new frameworks for intellectual property that protect community interests while fostering innovation for the common good.


Keypoints

## Major Discussion Points:


– **Infrastructure and Resource Inequality in AI Development**: The discussion highlighted the significant AI divide, with infrastructure, data, and skills concentrated among few actors. Key statistics showed AI investment doubling to $200 billion between 2022 and 2025 (three times global climate adaptation spending), with a single company, NVIDIA, controlling 90% of critical GPU production.


– **Local vs. Global AI Models and Cultural Preservation**: Participants debated the tension between large-scale global AI systems and the need for contextually grounded, local AI that preserves linguistic diversity and cultural knowledge. The conversation emphasized how current AI systems amplify “epistemic injustices” and western cultural homogenization while erasing local ways of thinking.


– **Data Ownership, Intellectual Property, and Commons**: A significant portion focused on rethinking data ownership models, moving from individual data monetization to collective approaches like data cooperatives. Participants discussed how current IP frameworks may not serve public interest and explored alternatives for fair value distribution from AI development.


– **Infrastructure Sharing and Cooperative Models**: Multiple speakers advocated for shared computing infrastructure (referencing models like CERN) and cooperative approaches to make AI development more accessible to smaller actors, developing countries, and local communities. Examples included India’s subsidized compute access and Switzerland’s supercomputer sharing initiatives.


– **Intentionality and Governance for Common Good**: The discussion emphasized the need for deliberate policy choices to steer AI development toward public benefit rather than purely private value creation, including precautionary principles, public procurement policies, and accountability mechanisms.


## Overall Purpose:


The discussion aimed to explore pathways for developing “local artificial intelligence” that serves humanity’s benefit, particularly focusing on how AI innovation can be made more inclusive, contextually relevant, and aligned with common good rather than concentrated corporate interests. The session sought to identify practical solutions for democratizing AI development and ensuring its benefits reach marginalized communities and developing countries.


## Overall Tone:


The discussion maintained a collaborative and solution-oriented tone throughout, with participants building on each other’s ideas constructively. While speakers acknowledged significant challenges and structural inequalities in current AI development, the tone remained optimistic about possibilities for change. The conversation was academic yet practical, with participants sharing concrete examples and policy recommendations. There was a sense of urgency about addressing these issues, but the overall atmosphere was one of thoughtful problem-solving rather than criticism alone.


Speakers

**Speakers from the provided list:**


– **Valeria Betancourt** – Moderator of the panel session on local artificial intelligence innovation pathways


– **Anita Gurumurthy** – From IT4Change, expert on digital justice and AI democratization


– **Wai Sit Si Thou** – From UN Trade and Development Agency (UNCTAD), participated remotely, expert on inclusive AI for development


– **Abhishek Singh** – Ambassador, Government of India, expert on AI infrastructure and digital governance


– **Sarah Nicole** – From Project Liberty Institute, expert on digital infrastructure and data agency


– **Thomas Schneider** – Ambassador, Government of Switzerland, economist and historian with expertise in digital policy


– **Nandini Chami** – From IT4Change, expert on AI governance and techno-institutional choices


– **Sadhana Sanjay** – Session coordinator managing remote participation and questions


– **Audience** – Various audience members including Dr. Nermin Salim (Secretary General of Creators Union of Arab, expert in intellectual property)


**Additional speakers:**


– **Dr. Nermin Salim** – Secretary General of Creators Union of Arab (consultative status with UN), expert in intellectual property law, specifically AI intellectual property protection


Full session report

# Local Artificial Intelligence Innovation Pathways Panel Discussion


## Introduction and Context


This panel discussion, moderated by Valeria Betancourt, examined pathways for developing local artificial intelligence innovation that serves humanity’s benefit. The session was structured around three key dimensions: inclusivity, indigeneity, and intentionality. Participants included Anita Gurumurthy from IT4Change, Ambassador Abhishek Singh from India, Wai Sit Si Thou from UN Trade and Development, Thomas Schneider (Ambassador from Switzerland), Sarah Nicole from Project Liberty Institute, and Nandini Chami from IT4Change.


The discussion was framed by striking statistics from the UN Digital Economy Report: AI-related investment doubled from $100 to $200 billion between 2022 and 2025, representing three times global spending on climate change adaptation. This established a central tension about democratising AI benefits whilst addressing resource constraints and environmental impact.


## Round One: Inclusivity and the AI Divide


### Infrastructure Inequality


Wai Sit Si Thou highlighted the profound inequalities in AI development capabilities, noting that NVIDIA produces 90% of critical GPUs, creating significant infrastructure barriers. This concentration represents what speakers termed the “AI divide,” where computing resources, data, and skills remain concentrated among few actors.


Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues creating environmental concerns. She noted that efficiency gains are being used to build larger models rather than reducing overall environmental impact.


### Shared Infrastructure Solutions


Ambassador Singh presented India’s approach to addressing infrastructure inequality through public investment. India created shared compute infrastructure with government subsidising costs to less than a dollar per GPU per hour, making expensive AI computing resources accessible to smaller actors who cannot afford commercial cloud rates.


Thomas Schneider described similar initiatives including Switzerland’s supercomputer network and efforts to share computing power globally. Multiple speakers endorsed a CERN-like model for AI infrastructure sharing, where pooled resources from multiple countries could provide affordable access to computing power for developing countries and smaller organisations.


### Framework for Inclusive Development


Wai Sit Si Thou presented a framework for inclusive AI adoption based on three drivers: infrastructure, data, and skills, with equity as the central focus. This approach emphasised working with locally available infrastructure, community-led data, and simple interfaces to enable broader adoption.


The framework advocated for worker-centric AI development that complements rather than replaces human labour, addressing concerns about technological unemployment. Solutions should work offline to serve populations without reliable internet access and use simple interfaces to overcome technical barriers.


## Round Two: Indigeneity and Cultural Preservation


### Epistemic Justice and Cultural Homogenisation


Anita Gurumurthy highlighted how current AI development amplifies “epistemic injustices,” arguing that Western cultural homogenisation through AI platforms erases cultural histories and multilingual thinking structures. She noted that large language models extensively use Wikipedia, demonstrating how AI systems utilise commons-based resources whilst privatising benefits.


The discussion revealed tension between necessary pluralism for local contexts and generalised models that dominate market development. Gurumurthy posed the critical question: “We reject the unified global system. But the question is, are these smaller autonomous systems even possible?”


### Preserving Linguistic Diversity


Ambassador Singh provided examples of addressing this challenge through crowd-sourcing campaigns for linguistic datasets. India’s approach involved creating portals where people could contribute datasets in their local languages, demonstrating community-led data collection that supports AI development reflecting linguistic diversity.


Wai Sit Si Thou emphasised that AI solutions must work with community-led data and indigenous knowledge for local contexts, advocating for approaches that complement rather than replace local ways of knowing.


## Round Three: Intentionality and Governance


### Beyond “Move Fast and Break Things”


Nandini Chami presented a critique of Silicon Valley’s “move fast and break things” approach, arguing that the precautionary principle should guide AI development given potential for widespread societal harm. She emphasised that private value creation and public value creation in AI are not automatically aligned, requiring deliberate policy interventions.


Chami highlighted how path dependencies mean AI adoption doesn’t automatically enable economic diversification in developing countries, requiring intentional approaches to ensure public benefit.


### Data Governance and Collective Approaches


Sarah Nicole challenged mainstream thinking about individual data rights, arguing that data gains value when aggregated and contextualised. She advocated for collective approaches through data cooperatives that provide better bargaining power than individual data monetisation schemes.


This contrasted with Ambassador Singh’s examples of marketplace mechanisms where individuals could be compensated for data contributions, citing the Karya company that pays delivery workers for data contribution. Nicole argued that individual data monetisation yields minimal returns and could exploit economically vulnerable populations.


### Democratic Participation


The discussion addressed needs for public participation in AI decision-making beyond addressing harms. Chami argued for meaningful democratic participation in how AI systems are conceptualised, designed, and deployed.


Sarah Nicole supported this through advocating for infrastructure changes that give users voice, choice, and stake in their digital lives through data agency and cooperative ownership models.


## Audience Questions and Intellectual Property


Dr. Nermin Salim raised questions about intellectual property frameworks and platforms for protecting content creators. Timothy asked remotely about IP frameworks and natural legal persons in the context of AI development.


The speakers agreed that current intellectual property frameworks are inadequate for the AI era. Gurumurthy highlighted how trade secrets lock up data needed by public institutions, whilst large language models utilise commons like Wikipedia without fair compensation to contributors.


## Key Areas of Agreement


### Cooperative Models


Speakers demonstrated consensus on the viability of cooperative models for AI governance, with support spanning civil society, government, and international organisations. There was strong agreement on shared infrastructure approaches and resource pooling.


### Community-Led Development


All speakers agreed on the importance of community-led and contextual approaches to AI development, representing a challenge to top-down, technology-driven deployment approaches.


### Need for Reform


Multiple speakers identified problems with existing intellectual property frameworks, agreeing that current regimes inadequately balance private rights with public interest.


## Unresolved Challenges


The discussion left critical questions unresolved, including the fundamental tension between pluralism and generalised models: how can smaller autonomous AI systems be made economically viable against dominant large-language models with scaling advantages?


The complexity of developing concrete metrics for safety, responsibility, and privacy in AI systems beyond “do no harm” principles remains challenging, particularly for establishing accountability across transnational value chains.


## Recommendations


Speakers proposed several concrete actions:


– Establish shared AI infrastructure models pooling resources from multiple countries


– Create global repositories of AI applications in key sectors that can be shared across geographies


– Develop crowd-sourcing campaigns for linguistic datasets to support AI development in minoritised languages


– Implement public procurement policies steering AI development toward human-centric solutions


– Explore data cooperative models enabling collective bargaining power


## Conclusion


This panel discussion revealed both the urgency and complexity of developing local AI innovation pathways serving humanity’s benefit. The speakers demonstrated consensus on the need for alternative approaches prioritising collective organisation, public accountability, and cultural diversity over purely market-driven solutions.


The conversation highlighted that inclusivity, indigeneity, and intentionality must be addressed simultaneously in AI development. However, significant challenges remain in translating shared principles into practical implementation, particularly the tension between necessary pluralism and economic pressures toward centralisation.


The discussion provides foundation for alternative policy approaches emphasising public interest, collective action, and democratic participation in AI governance, opening space for more deliberate, community-controlled approaches to AI development that could better serve diverse human needs whilst respecting resource constraints.


Session transcript

Valeria Betancourt: Welcome, everybody. Thank you so much for your presence here. This session is going to look at the case for local artificial intelligence, innovation pathways to harness AI for the benefit of humanity. I have the privilege of moderating this panel today. As the Global Digital Compact underscores, there is an urgent imperative for digital cooperation to harness the power of artificial intelligence innovation for the benefit of humanity. Evidence so far produced in several parts of the world, particularly in the context of the Global South, increasingly points to the importance of contextually grounded artificial intelligence innovation for a just and sustainable digital transition. This session is going to look at three dimensions of local artificial intelligence: inclusivity, indigeneity, and intentionality. Our speakers, from their expertise and viewpoints, will help us to get a deeper understanding of how these dimensions play out for local AI that is contextual and that contributes to the well-being of people and planet. So I have the pleasure of having Anita Gurumurthy from IT4Change to help us frame the conversation that we will have. And I will invite Anita then to come and please frame the conversation, set the ground and the tone for the conversation.


Anita Gurumurthy: Thank you, thank you, Valeria, and it’s an honor to be part of this panel. So I think the starting point when we look at a just and sustainable digital transition is to reconcile two things. On the one hand, you have an unequal distribution of AI capabilities, and on the other, you actually have, you know, an increasing set of demands owing to climate and energy and the impacts of innovation on a planetary scale. And therefore, the question is, how do we democratize innovation and look at ideas of scale afresh, because the models we have today are on a planetary scale. Both the production and consumption of AI innovation need to be cognizant of planetary boundaries. Essentially, then, what is this idea of local AI? Is it different from ideas of localizing AI? Is there a concept such as local AI? Will that even work? I just want to place before you some statistics, and we have a colleague online who will speak about this from UN Trade and Development, from the Digital Economy Report that was brought out by the UN, and I want to quote some statistics. Between 2022 and 2025, AI-related investment doubled from $100 billion to $200 billion. By comparison, this is about three times the global spending on climate change adaptation. So, we’re investing much more in R&D for AI and much less in what we need to do to, in many ways, address the energy question and the water question. Supercomputing chips have enabled some energy efficiency, but market trends suggest that this is not going to make way for building or developing models differently. It’s going to support bigger, more complex large-language models, in turn mitigating the marginal energy savings. And I’m going to talk a little bit about the future of computing and how it’s going to change the way we do things, which is possible because chips are becoming more energy efficient. So the efficiencies in compute are really not necessarily going to translate into some kind of respite from the climate change impacts. Now, I want to give you, you know, this is just for shock value: the energy demand of data centers. And this is a very, very vital concern. We also know that around the world there have been water disputes, you know, because of this. So there is this big conundrum: we do need and we do want small-is-beautiful models. But are they plausible? Are they probable? And while there is a strong case for diversified local models, I want to really underscore that there are lots of people already working on this. And we have some, you know, governments that are investing in this. And there are communities that are investing in this. And these are very important because from an anglocentric perspective, you know, we think everything is working well enough. You know, LLMs are doing great for us. ChatGPT is very useful. And certainly so, you know, to some extent. But what we ignore is that there is a western cultural homogenization and these AI platforms amplify epistemic injustices. So we are certainly doing more than excluding non-English speakers. We are changing the way in which we look at the world. We are erasing cultural histories and ways of thinking. So we need to retain the structures of our multilingual societies, because those structures allow us to think differently and decolonize scientific advancement and innovation in AI. So how do we build our own computational grammar? And this is a question I think that’s really important. And we reject the unified global system.
But the question is, are these smaller autonomous systems even possible? And we do this for minoritized communities, minoritized languages. And the second question is, many of the efforts in this fragmented set of communities are really not able to come together. And perhaps there is a way to bring them into dialogue and enable them to collaborate. So this tension between the pluralism that is so necessary and the generalized models that seem to be the only way AI models are developing in the market, this tension is where the sweet spot of investigation actually lies. And with that, I revert back to you.


Valeria Betancourt: Thank you, Anita. Thank you for illustrating also why enabling public accountability is a must in the way in which artificial intelligence is conceptualized, designed, and deployed. Let’s go to the first round of the conversation. I mentioned that we will be digging into three dimensions of local AI: inclusivity, indigeneity, and intentionality. The first round will focus on inclusion. And the question for Wai Sit Si Thou, from the UN Trade and Development Agency, and also Lynette Wambuka, from the Global Partnership for Sustainable Development Data, is: what are the pathways to AI innovation that are truly inclusive? And how can local communities be real beneficiaries of AI? So let me invite our panelists to please address this initial question. So can we go with Wai Sit Si Thou? Yes. She is online, joining remotely. Welcome. Thank you. Thank you very much.


Wai Sit Si Thou: Just to double-check whether you can see my screen and hear me well. Yes. Yes. Okay, perfect. So my sharing will be based on this UNCTAD flagship publication that was just released two months ago under the title Inclusive AI for Development. So I think it fits into the discussion very well. And to begin with, I would just want to highlight three key drivers of AI development over the past decades. And they are infrastructure, data, and skills. And when we want to look into the questions of equity, we need to focus on these three key elements. Because right now we can see a significant AI divide. For example, in terms of infrastructure, one single company, NVIDIA, actually produces 90% of the GPUs, which are a critical component of computing resources. And we witness the same kind of AI divide in data, skills, and also other areas like R&D, patents, scientific publications on AI, etc. So this is the main framework that helps us to dive into the discussion on how to make AI inclusive. And the first message that I have is on the key takeaways to promote inclusive AI adoption. This is featured in our report on many successful AI adoption cases in developing countries. And based on the framework that I just shared, on infrastructure, one very important takeaway is to work around the locally available digital infrastructure. Right now, across the world, we still have one third of the population without access to the Internet. So some kind of AI solution that is able to work offline would be essential for us to promote this adoption. And that is what I meant by working around the locally available infrastructure. And the second point, on data: it is essential to work with community-led data and also indigenous knowledge so we can really focus on the specific problem, on the issue in the local context. And the third key takeaway is on the skills that I mentioned. We should use simple interfaces that help users to use all these AI solutions. And the last one is on partnership, because from what we investigated, many of these AI adoption cases at the local level happen through partnerships. The second message that I have is on the worker-centric approach to AI adoption. From previous technological evolutions, we understand there are four key channels through which AI may impact productivity and the workforce. On the left-hand side, we have, on the top left, starting with the automation process, AI substituting human labor. And then on the top right-hand side, we have AI complementing human labor. And the other two channels are deepening automation and creating new forms of jobs. And from previous experience, automation, or this technology adoption, actually focuses on the left two bubbles, that is, replacing human labor. But if we really want to have an inclusive AI adoption that benefits everyone, we should focus on the right-hand side, on how AI can complement human labor and create meaningful new jobs. And with that, we need to focus on three areas of action. The first one is, of course, empowering the workforce, which includes everything from basic digital literacy to re-skilling and up-skilling, to help workers adapt to these new AI-enabled work processes. And the second very important point is what I also mentioned before: engagement with the workers. So we work with the community, we work with the workers, on the design and implementation of AI, to make sure that it fits the purpose and also gains the trust needed for this whole AI adoption process.
And the last point is about fostering the development of human-centric AI solutions. That would be the major responsibility of the government, through public procurement and other tax and credit incentives that steer AI adoption toward an inclusive and worker-centric approach. And the last thing that I want to highlight is that at the global level, there are also four key areas that we can work on. As Anita mentioned, accountability is key. What we want to advocate here is a public disclosure accountability mechanism that could reference the ESG reporting framework, which is really mature nowadays in the private sector. So an AI equivalent could happen, with public disclosure on how the AI works and its potential impact. So this is the accountability piece. The second one is on digital infrastructure. To provide equitable access to AI infrastructure, a very useful model that we can learn from is the CERN model, the world’s largest particle physics laboratory, right here in Geneva, where I am working. This model could help pool resources to provide shared infrastructure for every stakeholder. And the third one is on open innovation, including open data and open source, which can really democratize resources for AI innovation. What we need is to coordinate all these fragmented resources for better sharing and better standards. And the last point that I want to highlight is on capacity building. We think that an AI-focused center and network, modeled after the UN Climate Technology Centre and Network, could help in this regard to provide the necessary technology support and capacity building to developing countries. And of course, South-South cooperation could help us address common challenges. For example, in East Africa, Rwanda may not have enough data sources to train AI in the local language of Swahili. But by grouping the East African countries together, we can pool this common language across the region to have better AI training. So these are some of the recommendations that I have, and I am happy to engage in further discussion.


Valeria Betancourt: Thank you. Thank you. Thank you very much. So obviously, a multidimensional approach is needed for the dividends of AI to be distributed equally. With that, I would like to give the floor to Lynette Wambuka, Global Partnership for Sustainable Development Data, to also help us to… It’s not here. It’s not here? Yeah. OK, sorry. So is anyone in the panel willing to contribute to this part of the conversation in relation to how to bring the benefits of AI to local communities before we move to the other round? OK, if not, we can check whether there are any reactions from the remote participants, any questions in relation to this point, or from here from the audience. You are also welcome to comment and provide your viewpoint. OK, if not, we can move to the second round, which is going to look at indigeneity. What radical shifts do we need in artificial intelligence infrastructure for an economy and society attentive and accountable to the people? And I will invite Ambassador Abhishek Singh, Government of India, to comment, and Sarah Nicole from Project Liberty Institute to also help us to address this dimension of local AI. Please, Ambassador. Thank you.


Abhishek Singh: Thank you for convening this and bringing this very, very important subject to the fore: how do we balance wider AI adoption, building models, building applications, vis-a-vis the energy challenges that are there, which hamper in some ways the goals towards sustainable development that we had all agreed on. So, it’s not an easy challenge for governments across the world, because on one hand we want to take advantage of the benefits that are going to come, and on the other hand we want to limit the risks that are coming on climate change and sustainable development. So, the approach towards local AI seems to be good, but to make that happen there will be several necessary ingredients. Many of them were highlighted by our speaker from UNCTAD very succinctly, but I would like to just mention what we observe in India, which in many ways, given the diversity that we have, the languages that we have, the linguistic diversity, cultural diversity, contextual diversity, is kind of a microcosm of the whole world. How do we ensure that whatever we build in a country of our size and magnitude applies to all sections of society, so that everybody becomes included? In that, one key challenge of course relates to infrastructure, because AI compute infrastructure is scarce, it’s expensive, it’s not easy to get, and very few companies control it. To democratize access to compute infrastructure, the model that we adopted in India was to create a common central facility, of course provided through private sector providers, through which this compute is available to all researchers, academicians, startups, and industry, those people who are training models or doing inferencing or building applications. And we worked out a mechanism so this compute becomes available at an affordable cost. We underwrite to the tune of almost 40% of the compute cost from the side of the government, so the end user gets it at a rate which is less than a dollar per GPU per hour. So, this model has worked, and I do believe in the solution that was proposed earlier, building a CERN for AI. If we can create a global compute infrastructure facility across countries, with several foundations and multilateral bodies joining in and creating this infrastructure, making it available, it can really help. So, we have to make sure that we really solve the access to infrastructure challenge that we have. The second key ingredient for building AI applications and models is, of course, data. How do we ensure that we have data sets available? It’s okay to desire local AI models, contextual models, but until we have the necessary data sets in all languages, all contexts, and all cultures, it will not really happen. We had data sets in English and maybe the major Indian languages, but when it came to minor Indian languages, we had very limited data sets. We launched a crowd-sourcing campaign to get linguistic data across languages, across cultures, in which people could come to a portal and contribute data sets. So, that has really helped. And that model can, again, be made global, and that’s what we are trying to do. So, we have to make sure that we have data sets in all languages, as well as contextual and linguistic data sets.
That can be, again, an innovative solution towards making the data sets more inclusive and more global. The third key ingredient we need to enable, if we want to push local AI, is capacity-building and skills. AI talent is also rare and scarce; it’s limited. So, we need to provide training to students and to AI entrepreneurs on how to train models, how to wire up even 1,000 GPUs. It requires the necessary skills. If we can take up a capacity-building initiative, driven centrally through a UN body or the Global Partnership on AI, and ensure that all those capacity-building initiatives are implemented, covering training, doing inferencing, building models, and using AI for solving societal problems, it can really, really help. We should, of course, also build AI use cases in key sectors, whether it’s healthcare, whether it’s agriculture, whether it’s education, and create a global repository of AI applications which can be shareable across geographies. If we are able to take these three steps, across infrastructure, data sets, training and capacity building, and building a repository of use cases, I think we’ll be able to push forward the agenda of AI adoption and building local AI at some stage. Absolutely, Ambassador. Definitely, AI models have to reflect contextually grounded innovation norms and ethics. Then I would like to invite Sarah Nicole from Project Liberty Institute.


Sarah Nicole: Please share your thoughts with us on this issue. Yeah, thank you very much for the invitation to give this short lightning talk. And thank you for the first insights as well. I will be a little bit controversial, and I really appreciate the way the question was framed, the radicality aspect in it. Because the mainstream view is really that AI is a completely disruptive technology, that it changes everything in our societies, in our economies, in our daily life. But I would argue quite the contrary. AI is essentially a neural network, right, that replicates the way the brain works. It analyzes specific data sets; from those data sets it finds connections, creates patterns, and uses those patterns to respond to certain tasks like prompts, search, and so on. So overall, AI is an automation tool. It is a tool that accelerates and amplifies everything that we know. So necessarily, the current structure, which is highly centralized and strips users’ data out of their control, is reinforced by AI, and AI also reinforces the big tech companies and everything that we have known for decades. It benefits from the centralization of the digital economy that is necessary to train its models. So, AI is very much the result of the digital economy that has been in place for many, many years. So, if AI is a continuity and an amplification of what we already know, then the radicality needs to come from the response that we bring to it. And at Project Liberty Institute, we believe that every person, user, citizen, call it what you want, deserves to have a voice, a choice, and a stake in their digital life. And this starts by giving users data agency. This requires infrastructure design changes, profound ones. In the digital economy, data is not just a byproduct. It is a political, social, and economic power that is deeply tied to our identities. And most of the network infrastructure that is currently in place has been captured by a few dominant tech platforms. So, necessarily, everything that is built on it falls under this proprietary realm, scrapping, of course, user empowerment, transparency, privacy, and so on. AI rapidly shapes everything that we are doing in our lives, so we need to rethink this infrastructure model because it shapes data agency. And Anita, you were great to launch this report with us in Berlin last month. So, I will be happy to share this report that we wrote specifically for policymakers, to equip them to think through digital infrastructure questions. But infrastructure for agency is really what we are focusing on at the Institute. We are the steward of an open-source protocol called DSNP, which builds directly on top of TCP/IP. DSNP allows users better control of their own data by enabling them to interact with a global, open social graph. What this means is that your social identity on DSNP is not tied to one specific platform like it is today on most tech platforms; it exists independently, which allows portability of your data, but also interoperability. So, this is a core part of infrastructure that represents a radical shift for an economy and society attentive and accountable to the people. But unfortunately, this would be a little bit too good to be true if all that was needed were a few lines of code and some specs and protocols.
Just as important is the business model, and there is a lot of work to be done here, because to this day the most lucrative business model is the one that scrapes users’ data and then uses it for advertising, and we have yet to find a scalable alternative to this. And in order to build what we call the fair data economy, we are in need of metrics. We need to be better at articulating what we mean by safety, responsibility, privacy; what exactly do we mean behind these beautiful words? So, we need qualitative and quantitative metrics to define all this. Likewise, we need to go beyond the do-no-harm principle to really shape a positive vision of technology that is socially and financially benefiting everyone. And one of the approaches that we are exploring at the Institute is that of data cooperatives. The cooperative model has a legacy of hundreds of years, and it is actually pretty well fit for the age of AI. There is a recent report on this, produced by an astrophysicist, that I am happy to share with those who want it, but let me extract two points from this report that I think are interesting for the sake of this discussion. Data cooperatives allow us to rethink the value of data in a collective manner, and I think that is very important, because the debate is very much structured around personal data and individual data, but the issue is so structural that we need to empower users with collective bargaining tools vis-a-vis big tech corporations. And the second point is that, in the age of AI, data needs to be of high quality, and data cooperatives provide the right incentive for data contributors to improve the quality of their data, because that contributes to the greater financial sustainability of their own co-op; so it also serves data-pooling purposes. And of course there are many other models that exist, data commons, data trusts, you name it. A radical shift for a better economy will in any case need many tries and many stakeholders to be involved, and we are already seeing this every day in multiple communities across the world. But one last thing that I wanted to mention here today: what I just said, I don’t think it should be considered radical at all. We own our identity in the analog world; we don’t accept others making billions on top of our own identity, so why should it be any different in the online world? So all in all, the goal is really to have a voice, a choice, and a stake online, and I don’t think this is radical, I think this is pretty much common sense.


Valeria Betancourt: Thanks. Thank you, thank you, Sarah. I think you have helped us pave the way very nicely to the next round of conversation, because if we want AI to be meaningful to people, the intention behind it is absolutely crucial. And with that, I would like to invite Ambassador Thomas Schneider from the government of Switzerland and Nandini Chami from IT4Change to address the question of how AI innovation… And now, I would like to ask you to share your views on how the transition pathways can be steered toward the common good, with that intention of the common good.


Thomas Schneider: Please share your views on that. Ambassador, welcome. Thank you, and thank you for making me part of this discussion, because this is a discussion of fundamental importance, also for a country like mine, maybe not necessarily poor but definitely small. You have highlighted some of the aspects: how can a small actor cope, survive, call it whatever you want, in a system where, by design, the big ones have the resources and the power? But the question is, does it have to be like this, or are there alternatives? And we have already heard a number of elements of how the small ones could cooperate in order to benefit from this as well. Of course we know about the risks and all of this, but I think it would be a mistake not to use these technologies, because the potential is huge. Being an economist and a historian, not a lawyer, much of this reminds me of the first industrial revolution, where Switzerland was a country that was lagging behind. The UK already had trains and railways, and we were still walking around in the mountains. But then we caught up quite quickly. It wasn't enough just to buy locomotives and coaches from the UK or produce them ourselves. We had to realize that you need to build a whole ecosystem in order to use this technology and make it your own, and some of this has been mentioned. What struck me: lately I read an article about the demise of Credit Suisse, the Swiss bank, and it struck me again that this bank was created by the politician and the people who actually brought the railways to Switzerland and built the railway system. So what did they do? They did not just buy coaches and build railways and bridges and tunnels. They also built ETH Zurich. They knew: we need engineers, we need people with the skills to actually drive these things and build the infrastructure. So they did not just create the railway. They created the first polytechnical universities. And they knew that as a small country we did not have the resources: we needed somebody to give us credit, we needed a financial system around it that also connects you. You can have nice ideas, but if you do not get the resources for them, nothing happens. And it is remarkable that this was all driven basically by one person plus his team in the 1840s and 50s. I think we need to understand, and we have heard a lot of input on this, what we need, each community for itself, in order to create our own ecosystem, and how to cooperate with others in the same situation. It can be communities in different countries; it can also be communities at the other end of the world, where cooperation may create a win-win situation. So I think this is really important for small actors: how can we break this vicious cycle of scaling effects that we cannot match? And we have heard some elements that are important for us in Switzerland. The cooperative model is one: many of our economic success stories are actually still cooperatives. The biggest supermarket in Switzerland was created 100 years ago as a cooperative, and it is still a cooperative, not as much as it used to be, but legally it is a cooperative.
Every customer can actually vote, so every few years there is a discussion: should this supermarket be able to sell alcohol or not? The management wants to, but the people say no. And we have insurances that are cooperatives, and so on. So that is one element. Another element is sharing computing power. In Switzerland we started working with NVIDIA on developing their chips ten years ago, and now we have the result: one of the ten biggest supercomputers, apart from the private ones of the big companies of course, is in Switzerland. We cooperate with Lumi, with the Finns, and we have started to set up a network to share computing power across the world for small actors, universities, and so on. This initiative is called ICAIN. So there is a lot to do, and if we make a good summary of the elements we have heard so far, that gives us some guidance for the next steps.


Valeria Betancourt: Thank you, Ambassador. Nandini, please help us with your views. It's a very interesting conversation, and I think we are having it at a very timely moment,


Nandini Chami: when there is a recognition that, if we are talking about a just and sustainable digital transition, we need to get out of the dominant AI paradigm and move towards something else. So I will begin by sharing a couple of thoughts about the challenges we face in steering AI innovation pathways towards the common good. These reflections come from the UNDP's Human Development Report of 2025, which focuses on the theme of people and possibilities in the age of AI. The first challenge we find in this report is that, in shaping the trajectories of AI innovation, private value and public value creation goals are not always or automatically aligned. To quote from the report: despite AI's potential to accelerate technological progress and scientific discovery, current innovation incentives are geared towards rapid deployment, scale, and automation, often at the expense of transparency, fairness, and social inclusion. So how do we shape these trajectories intentionally and consciously? That is very important. The second insight from this report is that, since development is a path-dependent project, these path dependencies mean that AI adoption does not automatically open up routes to economic diversification. We just heard reflections on ecosystem strengthening, and this report adds a similar lens: the economic structures in many developing countries and LDCs may limit the local economy's potential to absorb productivity spillovers from AI, and there may be fewer and weaker links to high-value-added activities. This means there needs to be complementarity between development roadmaps and AI roadmaps: the objectives of development, the contextual mapping of strengths, opportunities, challenges, and weaknesses in terms of where the potential for economic diversification lies, and where we use AI for bridge-building as a general-purpose technology. These become extremely contextually grounded activities, and we need to move beyond an obsession with AI economy roadmap development as a purely technological activity and look at it as an ecosystem activity. From this perspective, I would like to share, from our work at IT4Change, three to four reflections on what it would take to make the techno-institutional choices that will shape these innovation trajectories in the directions we seek. First, the issue of technology foresight. On this panel we were also discussing the do-no-harm principle. Oftentimes in these debates we hear a discourse of inevitability, of AI as a Frankenstein technology that will definitely go out of control, and a lot of long-termist alarmism that we will no longer be able to control AI. What happens is that this distracts from setting limits on AI development in the here and now. Operationalizing the do-no-harm principle means that, instead of moving fast and breaking things, we probably need to go back to the precautionary principle of the Rio Declaration in deciding how to shape technologies. And secondly, as the Aarhus Convention specifies in the context of environmental decision-making, we need to be talking about the right of the public to access information and participate in AI decision-making, so that we are not looking only at the rights of affected parties in the AI harms discourse.
The second point is that in AI value chains, which are transnational, very complex, and involve multiple actors, system providers, deployers, and the subject citizens on whom AI is finally deployed, how do we fix liability for individual, collective, and societal harms? And how do we update our product fault liability regimes so that the burden of proof is no longer on the affected party to prove the causal link between the defect in a particular AI product or service and the harm that was suffered? Given the black-box nature of this technology, thinking this through becomes very important. Thirdly, when we look at technological infrastructure choices, open-source affordances are of course very important as a starting point, but it is useful to remember that openness does not automatically guarantee innovation and inclusivity. Experiences of building open-source AI on top of existing stacks have shown that it is very much possible for dominant big tech firms to capture the primary infrastructure; that is what this research shows. And my last point is about policy support for fostering alternatives, particularly federated AI commons thinking. There are alternative visions, such as community AI, that focus on task-specific experiences in specific communities. At IT4Change we are exploring the development of such a model with the public school education system in Kerala, for instance. There have also been proposals in G20 discussions, as part of the T20 dialogues, about how to shape public procurement policies and the directions of public research funding towards the development of shared compute infrastructure, which came up in our discussion, and about how to ensure fair participation of different market actors on public AI stacks and in the use of public AI compute.
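One technical building block often associated with federated AI commons thinking is federated averaging, in which communities train on their own data locally and share only model updates with a coordinator. The sketch below is a minimal, illustrative version of that idea using linear regression; it does not describe the Kerala pilot or any specific deployment, and all data here is synthetic.

```python
import numpy as np

# Minimal sketch of federated averaging (FedAvg): each community
# trains on its own data locally and shares only parameter updates,
# which a coordinator averages weighted by local dataset size.
# Purely illustrative; not a description of any deployed system.

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a community's local data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(updates: list[np.ndarray], sizes: list[int]) -> np.ndarray:
    """Average community updates, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Four communities, each holding its own small synthetic dataset.
communities = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
for _ in range(10):  # a few federation rounds
    updates = [local_update(global_w, X, y) for X, y in communities]
    global_w = federated_average(updates, [len(y) for _, y in communities])
print(global_w)  # shared model; raw data never left each community
```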


Valeria Betancourt: Thank you so much, Nandini. Let me check with Sadhana whether there are remote participants who want to make interventions or have questions, and I also invite you all to get ready with your questions, comments, and reactions if you have any.


Sadhana Sanjay: Thank you, Valeria. I hope everyone can hear me. There is one question in the chat from Timothy, who asks: digital transformation is built upon intellectual property rights frameworks, means of ownership, and trade. When considering existing trends, projects, and works that are resourced versus those that lack resourcing, how are natural legal persons provided the necessary support to retain legal agency, both for themselves and in support of traditional roles such as those of a parental guardian or others? Thank you, Sadhana. Would anyone on the panel like to address that question? I didn't hear the question clearly; this is about intellectual property. If you could repeat the second half: I got the first half, but not the second. If I understand correctly, the question is asking: given that ownership rights are conferred on the developers of AI and on non-natural legal persons such as corporations, how can natural legal persons such as ourselves retain our rights and agency over the building blocks of AI, both individually and through those who might be in charge of us, such as guardians and custodians?


Abhishek Singh: One part is that, with the way the technology is evolving, there are IP-driven solutions and there are open-source solutions. What we need to emphasize is promoting open-source solutions to the extent possible, so that more and more developers get access to the APIs and can build applications on top of them. The second part is that, ultimately, somebody has to pay for these solutions; it is not that everything will come for free. And the companies known for providing services for free monetize your data. We all know about it; there have been big tech companies indulging in that. So at some point we will have to take a call: if I want to use a service, like the ChatGPT service you mentioned, which helps me improve my efficiency and my productivity, either I pay for the service or I contribute to their assets. Individuals, companies, and societies will need to make that call: what is the cost of convenience, what is the cost of getting a service, and in what form do we pay it? The other thing that can be done, which is very complex, is to work out a marketplace kind of mechanism in which every service is priced. So if we are contributing data sets, if I am contributing to building a corpus in a particular language, can we incentivize those who contribute the data sets? In fact, there is a company called Karya in India which is doing that: it actually pays people for contributing data sets, which ensures that those who are part of the ecosystem benefit. Then there are companies which have started incentivizing food delivery workers and cab drivers, Uber drivers, so that when they drive around they collect details about city amenities, about garbage dumps, missing manhole covers, street lights and traffic lights not functioning, share that information with the city government, and in turn get paid for that service. So depending on what a data contributor is contributing, and in what form, there can be models and mechanisms through which a cost- and revenue-sharing model can be developed. It will require specific approaches for specific use cases, but it is not that it cannot be done.
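A marketplace mechanism of the kind described could, in its simplest form, price each validated contribution and accumulate earnings per contributor. The sketch below is a hypothetical illustration: the item types, rates, and validation rules are invented, and real schemes such as Karya's will differ.

```python
# Sketch of a data marketplace that prices each validated contribution,
# in the spirit of initiatives that pay people for contributing datasets.
# Contributor names, item kinds, prices, and validation flags are all
# invented for illustration.

PRICE_PER_ITEM = {"speech_recording": 0.25, "city_report": 0.10}  # hypothetical rates

def settle(contributions: list[dict]) -> dict[str, float]:
    """Accumulate earnings per contributor, counting validated items only."""
    earnings: dict[str, float] = {}
    for item in contributions:
        if item["validated"]:
            earnings[item["contributor"]] = (
                earnings.get(item["contributor"], 0.0) + PRICE_PER_ITEM[item["kind"]]
            )
    return earnings

batch = [
    {"contributor": "driver_17", "kind": "city_report", "validated": True},
    {"contributor": "driver_17", "kind": "city_report", "validated": False},
    {"contributor": "annotator_4", "kind": "speech_recording", "validated": True},
]
print(settle(batch))  # {'driver_17': 0.1, 'annotator_4': 0.25}
```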


Valeria Betancourt: Thank you, Ambassador. Maybe if I can add, there are a number of good examples. First of all, property rights are not carved in stone.


Thomas Schneider: This is something that can be, and will need to be, reformed and renegotiated. With what outcome and how is another question, because otherwise, in many ways, property rights no longer work for journalism and the media either. So we will have to develop a new approach and ask what the original idea behind property rights was. The idea may be right, but then we need to find a new approach to it. That is one element, what you may do on the political level, on the market level. The other is to try to find ways to create a fair-share system for the benefits. One way is to try to monetize it, like a kind of transaction, giving every data transaction a value. The other, and I think we have already heard this, is to think it not only from the individual but from society. In Switzerland, for instance, we are a liberal country, but there are many things people do not want privatized, because they think they should be in public hands, like waste management or hospitals; it is a very hot issue, and so on. So I think we should think about how we organize ourselves as a society if we want to develop our health system, for instance. Health data is super important, super valuable, and of course the industry needs a lot of money to develop new pharmaceutical products. But how can we organize ourselves as a society, given that as individuals we are too weak? The whole society can say: OK, we are offering something to businesses so that they can develop things and make money, but we want a fair share of this, because we are, in a way, your research lab. And if you are a big group, you also have political weight. Then you need to find creative, concrete ways to actually get this done. So you need to work on the idea and the concept and on defining the ways. But it is a super important question.


Sarah Nicole: If I can build on those two and fully agree with what has been said: the question of having a stake in your data has often been framed at the personal level. And actual studies have shown that you would make very, very little if you were to monetize your own data; per year it would be a couple of hundred euros or dollars. And the worst part is that it could also lead to systems where poor people spend a lot of time online to generate very small revenues. So the answer will not be found at the individual level, but at the collective one, because it is when data is aggregated, when data is in a specific context, that it gains value. And here again, let me bring up the cooperative model. It is true that, theoretically, there is a lot of work on data cooperatives; practically speaking, they have yet to emerge at scale. One of the reasons is that it is not natural for businesses to turn into a cooperative model, because it is perceived as a socialist or communist thing, which it is not, and hundreds of years of legacy have proven that. But there are many data cooperatives that pool specific data with a specific type of expertise and then allow an AI to be trained on this expertise and high-quality data, and through which we can have better rights and better protections for individuals once data is aggregated in common. So the mentality really needs to shift from this personal-data framing of the discussion, which I think also benefits a lot of the big tech companies, to a more collective and organizational perspective.


Valeria Betancourt: Thank you. Anita.


Anita Gurumurthy: I don’t think that there’s an easy answer and I think we need to step up and rethink as people have said on this entire idea of what’s ownership. Two things I would like to say is that for developing countries particularly, I think in our global agreements on trade and intellectual property, we oftentimes cede our space to regulate in the public interest back in our countries. So often, transnational companies use the excuse of trade secrets to lock up data that otherwise should be available to public transportation authorities, public hospitals, etc. And perhaps we do need to strongly institute exceptions in IP laws for the sake of society to be able to use that threshold of aggregate data that is necessary to keep our societies in order. I’m sorry, I’m using that terminology in a very, very broad sense. But I mean, that is needed. You just can’t lock up that data and say it’s not available because it’s a trade secret. The second is that the largest source for the large language models, especially ChargPT, was Wikipedia. So you actually see free riding happening on top of these commons. And therefore, that’s another imperative, I think, for us to rethink the intellectual property regime on, well, we will do open source. But what if my open source meant for my community is actually servicing profiteering? So we do need laws to think through those data exchanges, whether it’s agricultural data, whatever data commons, or public data sets do need to protect society from free riding and also foul dealing. Foul dealing is when the exploitation really reaches a very, very high threshold. The last point I wanna make is we’ve been talking about the nudge economy that has generated the data sets, but what we read today is that there’s an economy of prompt. On top of AI models that you see when you search is the way in which you’re defining your prompts as users, and that is perfecting the large language models. So this is a complexity from nudge to prompt, which means that all of us are feeding the already monopolistic models with the necessary information for that to become more efficient. Which effectively means that the small can never survive.


Valeria Betancourt: So what to do for the small to survive is actually a question of societal commons, so that this economy of the prompt, and the profiteering from prompts, can be curtailed. And I think these are future questions for governance and regulation, but essentially also for international cooperation. That's excellent. Okay, let me now invite your comment or your question, please.


Audience: My name is Dr. Nermin Salim, Secretary General of the Creators Union of Arab, which has consultative status with the UN. I happen to be an expert in intellectual property, so I just want to comment on the intellectual property of AI. At WIPO, the World Intellectual Property Organization, they have not yet reached an ideal convention for protecting AI, because the matter is divided between two sections: AI as a data platform for sharing content in a digital, technological way, and the content which is generated by AI. For this reason, we in civil society launched at the last IGF, in Riyadh, a platform for protecting users' content in the digital era. When users want to share their content, on social media, the internet, or elsewhere, they can submit it to the platform and receive a QR code, verified by blockchain; it then goes to the government ministry responsible for registration, which verifies it, so that personal ownership can be established in case of a conflict between users. It is available now as a demo. That is just a comment on the question.
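The registration flow described here can be sketched as: hash the content, record the hash and a timestamp on an append-only ledger, and hand the user a receipt token that a QR code could encode. In the illustrative sketch below, an in-memory list stands in for the blockchain, and all names are invented; it is not the platform mentioned by the speaker.

```python
import hashlib
import time

# Minimal sketch of the content-registration flow described above:
# hash the content, append the hash with a timestamp to a ledger
# (a real system would use a blockchain and encode the receipt as a
# QR code), and later verify ownership claims against that record.

LEDGER: list[dict] = []  # stand-in for an append-only blockchain

def register(content: bytes, owner: str) -> str:
    """Record a content fingerprint and return it as a receipt token."""
    digest = hashlib.sha256(content).hexdigest()
    LEDGER.append({"hash": digest, "owner": owner, "timestamp": time.time()})
    return digest  # this token is what a QR code would encode

def verify(content: bytes) -> dict | None:
    """Find the earliest registration matching this content, if any."""
    digest = hashlib.sha256(content).hexdigest()
    matches = [entry for entry in LEDGER if entry["hash"] == digest]
    return min(matches, key=lambda e: e["timestamp"]) if matches else None

token = register(b"my original article", owner="author_123")
print(verify(b"my original article"))   # earliest registration record
print(verify(b"someone else's text"))   # None: never registered
```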


Valeria Betancourt: Thank you very much. We are a minute away from the end of the session, and I would like to invite everyone on the panel to share some final remarks. Just very brief final remarks, like


Nandini Chami: 10 seconds with the highlight that you would like to leave the audience with, please. Let me start with you, Nandini. I think the discussion is showing us that there is a long history behind the problem AI presents: how to continue incentivizing innovation while preserving the common heritage, particularly in knowledge and IP. AI is a new instantiation of that problem. Yes, Ambassador Schneider. Thank you. I will just say that this was really exciting, and I hope we can follow up on this, because it is super important, and I really thank you for this discussion. Sarah.


Sarah Nicole: A thank you will be my last word as well. Ambassador. Yeah, my takeaway is that the


Abhishek Singh: cooperative model for infrastructure and data sets works, and then maybe for models and applications we need to push forward more for open-source models, without the concerns of IP and other issues. Absolutely. I am thinking that the public and the local cannot exist without each other. Yeah, absolutely, and thank you so much for your presence, and no easy answers, as you said. Oh yes, I'm sorry, Jackie, please, your final remarks. Yes, thank you. I think data is a very strategic and key asset for both AI and the digital economy, and with that I just want to


Audience: share with you that we have recently established a multi-stakeholder working group on data governance, so hopefully it can provide some recommendations on how we can develop a good data governance framework. Thank you. Absolutely. So, no easy answers; some of the responses and solutions


Valeria Betancourt: are coming from the margins, from academia, from the social movements, and from the different groups impacted by digitalization. So yes, let's keep the conversation going, and let's use this space, and hopefully also the WSIS+20 review, to define the grounds for different approaches and a different paradigm for AI for the common good. So, thank you so much for your presence and to all of you for your contributions. Thank you so much. Thank you.


A

Anita Gurumurthy

Speech speed

149 words per minute

Speech length

1094 words

Speech time

438 seconds

AI investment doubled from $100 billion to $200 billion between 2022 and 2025, three times global climate adaptation spending

Explanation

Gurumurthy highlights the massive financial resources being directed toward AI development compared to climate adaptation efforts. This disparity shows misaligned priorities given the urgent need for climate action and the environmental costs of AI infrastructure.


Evidence

Statistics from UN Trade and Development Digital Economy Report showing AI investment doubling from $100 billion to $200 billion between 2022 and 2025, which is three times global spending on climate change adaptation


Major discussion point

Resource allocation priorities between AI development and climate adaptation


Topics

Development | Economic


Energy demand of data centers creates water disputes and climate concerns despite chip efficiency improvements

Explanation

Despite technological improvements in chip efficiency, the overall energy and water consumption of AI infrastructure continues to grow. Market trends suggest these efficiencies will support larger, more complex models rather than reducing environmental impact.


Evidence

References to water disputes occurring globally due to data center demands and the trend toward bigger, more complex large-language models that offset marginal energy savings


Major discussion point

Environmental sustainability of AI infrastructure


Topics

Development | Infrastructure


Western cultural homogenization through AI platforms amplifies epistemic injustices and erases cultural histories

Explanation

Current AI systems, dominated by Western perspectives and English language, are not just excluding non-English speakers but actively changing worldviews and erasing diverse cultural knowledge systems. This represents a form of digital colonialism that threatens cultural diversity.


Evidence

Discussion of anglocentric perspective in AI development and how LLMs change ways of thinking and erase cultural histories


Major discussion point

Cultural preservation and decolonization in AI development


Topics

Sociocultural | Human rights principles


Need to retain multilingual society structures and decolonize scientific advancement in AI

Explanation

Preserving multilingual societies is essential because different language structures enable different ways of thinking and understanding the world. Decolonizing AI means building computational systems that reflect diverse epistemologies rather than imposing a single worldview.


Evidence

Emphasis on how multilingual structures allow different ways of thinking and the need to build ‘our own computational grammar’


Major discussion point

Decolonization and multilingualism in AI


Topics

Sociocultural | Human rights principles


Agreed with

– Wai Sit Si Thou
– Abhishek Singh

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


Tension exists between necessary pluralism and generalized models dominating market development

Explanation

There’s a fundamental conflict between the need for diverse, culturally-specific AI models and the market’s tendency toward unified, generalized systems. This tension represents the key challenge in developing truly inclusive AI that serves different communities.


Evidence

Discussion of the ‘sweet spot of investigation’ lying in the tension between pluralism and generalized models


Major discussion point

Balancing diversity with scalability in AI development


Topics

Sociocultural | Economic


Trade secrets shouldn’t lock up data needed by public institutions like hospitals and transportation authorities

Explanation

Transnational companies often use intellectual property protections to prevent public institutions from accessing data that would be beneficial for society. This creates barriers to public service delivery and societal functioning.


Evidence

Examples of public transportation authorities and public hospitals being denied access to data due to trade secret claims


Major discussion point

Public interest exceptions in intellectual property law


Topics

Legal and regulatory | Human rights principles


Agreed with

– Thomas Schneider
– Sadhana Sanjay

Agreed on

Current intellectual property frameworks are inadequate and need reform for the AI era


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation

Explanation

Major AI systems like ChatGPT have been trained extensively on freely available resources like Wikipedia, representing a form of exploitation of digital commons. This highlights the need for legal frameworks to protect community-created resources from commercial exploitation.


Evidence

Specific mention that Wikipedia was the largest source for large language models, especially ChatGPT


Major discussion point

Protecting digital commons from commercial exploitation


Topics

Legal and regulatory | Economic


Agreed with

– Sarah Nicole
– Thomas Schneider

Agreed on

Individual data monetization is insufficient; collective approaches are more viable


W

Wai Sit Si Thou

Speech speed

140 words per minute

Speech length

951 words

Speech time

407 seconds

AI divide exists with NVIDIA producing 90% of GPUs, creating significant infrastructure inequality

Explanation

The concentration of critical AI infrastructure in the hands of a single company creates massive inequalities in access to AI capabilities. This monopolistic control over essential computing resources represents a fundamental barrier to democratizing AI development.


Evidence

Statistic that NVIDIA produces 90% of GPUs, which are critical components for AI computing resources


Major discussion point

Monopolization of AI infrastructure


Topics

Infrastructure | Economic


Three key drivers for inclusive AI: infrastructure, data, and skills with focus on equity

Explanation

Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructure, availability of diverse datasets, and development of necessary technical skills. These elements must be developed with explicit attention to equity rather than assuming market forces will provide fair access.


Evidence

Framework analysis showing AI divide across infrastructure, data, skills, R&D, patents, and scientific publications


Major discussion point

Foundational requirements for inclusive AI development


Topics

Development | Infrastructure


Worker-centric approach needed focusing on AI complementing rather than replacing human labor

Explanation

Rather than following historical patterns of automation that replace workers, AI development should prioritize applications that enhance human capabilities and create meaningful employment. This requires intentional design choices and policy interventions to steer technology toward complementary rather than substitutional uses.


Evidence

Four-channel framework showing automation vs. complementation paths, with emphasis on right-hand side channels of complementing human labor and creating new jobs


Major discussion point

Human-centered AI development approach


Topics

Economic | Development


AI solutions must work with community-led data and indigenous knowledge for local contexts

Explanation

Effective AI applications for local communities require incorporating community-generated data and traditional knowledge systems rather than relying solely on external datasets. This approach ensures AI solutions address specific local problems and contexts.


Evidence

Emphasis on working with community-led data and indigenous knowledge to focus on specific local problems and issues


Major discussion point

Community-centered AI development


Topics

Sociocultural | Development


Agreed with

– Abhishek Singh
– Anita Gurumurthy

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


AI solutions should work offline to serve populations without internet access

Explanation

Given that one-third of the global population lacks internet access, AI solutions must be designed to function without constant connectivity. This technical requirement is essential for ensuring AI benefits reach underserved communities.


Evidence

Statistic that one-third of global population lacks internet access, making offline AI solutions essential


Major discussion point

Technical accessibility for underserved populations


Topics

Development | Infrastructure


Simple interfaces needed to enable broader user adoption of AI solutions

Explanation

AI systems must be designed with user-friendly interfaces that don’t require technical expertise to operate. This design principle is crucial for democratizing access to AI benefits across different skill levels and educational backgrounds.


Evidence

Emphasis on simple interfaces as key takeaway for promoting inclusive AI adoption


Major discussion point

User experience design for inclusivity


Topics

Development | Sociocultural


CERN model could provide shared AI infrastructure through pooled resources from multiple stakeholders

Explanation

The collaborative model used by CERN for particle physics research could be adapted for AI infrastructure, allowing multiple countries and organizations to pool resources for shared computing capabilities. This approach could democratize access to expensive AI infrastructure.


Evidence

Reference to CERN as world’s largest particle physics laboratory in Geneva and its successful resource-pooling model


Major discussion point

International cooperation models for AI infrastructure


Topics

Infrastructure | Development


Agreed with

– Abhishek Singh
– Thomas Schneider

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


South-South cooperation can address common challenges like training AI with regional languages

Explanation

Countries in the Global South can collaborate to overcome individual limitations in AI development, such as insufficient data for training models in shared languages. Regional cooperation can achieve what individual countries cannot accomplish alone.


Evidence

Example of East African countries pooling resources to train AI models in Swahili, which Rwanda alone couldn’t achieve


Major discussion point

Regional cooperation for AI development


Topics

Development | Sociocultural


Multi-stakeholder working group on data governance needed to develop good framework recommendations

Explanation

Given the strategic importance of data for both AI and the digital economy, a collaborative approach involving multiple stakeholders is necessary to develop effective governance frameworks. This multi-stakeholder model can provide comprehensive recommendations for data governance.


Evidence

Announcement of recently established multi-stakeholder working group on data governance


Major discussion point

Collaborative governance approaches for data


Topics

Legal and regulatory | Development


A

Abhishek Singh

Speech speed

177 words per minute

Speech length

1379 words

Speech time

466 seconds

India created shared compute infrastructure with government subsidizing 40% of costs to democratize access

Explanation

India addressed the challenge of expensive and scarce AI computing resources by creating a centralized facility that provides affordable access to researchers, academics, startups, and industry. Government subsidies make GPU access available at less than a dollar per hour, demonstrating a viable model for democratizing AI infrastructure.


Evidence

Specific details of 40% government subsidy and pricing at less than a dollar per GPU per hour for end users


Major discussion point

Government intervention to democratize AI infrastructure access


Topics

Infrastructure | Economic


Agreed with

– Wai Sit Si Thou
– Thomas Schneider

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


Disagreed with

– Nandini Chami

Disagreed on

Speed vs. Precaution in AI Development


Crowd-sourcing campaigns for linguistic datasets across languages and cultures can democratize data access

Explanation

When facing limited datasets for minor Indian languages, India launched crowd-sourcing initiatives that allowed people to contribute linguistic data through online portals. This approach can be scaled globally to address data scarcity for underrepresented languages and cultures.


Evidence

Description of portal-based crowd-sourcing campaign for linguistic data across Indian languages and cultures


Major discussion point

Community participation in AI dataset creation


Topics

Sociocultural | Development


Agreed with

– Wai Sit Si Thou
– Anita Gurumurthy

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


Global repository of AI applications in healthcare, agriculture, and education should be shareable across geographies

Explanation

Creating a centralized collection of AI use cases in critical sectors like healthcare, agriculture, and education would enable knowledge sharing and prevent duplication of effort across different regions. This repository approach could accelerate AI adoption for social good globally.


Evidence

Emphasis on building use cases in key sectors and creating shareable repositories across geographies


Major discussion point

Knowledge sharing for AI applications in social sectors


Topics

Development | Sociocultural


Capacity building initiatives needed for training on model development and GPU management skills

Explanation

The scarcity of AI talent requires systematic capacity building efforts to train people in technical skills like model training and managing large-scale computing resources. This skills development is essential for enabling local AI development capabilities.


Evidence

Mention of training needs for wiring up 1,000 GPUs and other technical AI development skills


Major discussion point

Technical skills development for AI


Topics

Development | Infrastructure


Marketplace mechanisms could incentivize data contributors through revenue sharing models

Explanation

Rather than having companies monetize user data without compensation, marketplace systems could be developed where data contributors receive payment for their contributions. This approach recognizes the value of data and provides fair compensation to those who generate it.


Evidence

Examples of Karya company paying people for contributing datasets and incentivizing delivery workers to share city information with governments


Major discussion point

Fair compensation for data contribution


Topics

Economic | Legal and regulatory


Agreed with

– Sarah Nicole
– Thomas Schneider

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Disagreed with

– Sarah Nicole

Disagreed on

Individual vs. Collective Data Monetization Approaches


S

Sarah Nicole

Speech speed

164 words per minute

Speech length

1326 words

Speech time

484 seconds

AI is an automation tool that amplifies existing centralized structures rather than disrupting them

Explanation

Contrary to mainstream narratives about AI being completely disruptive, it actually functions as a neural network that analyzes data and finds patterns, essentially automating and accelerating existing processes. This means AI reinforces current power structures and centralization rather than fundamentally changing them.


Evidence

Technical explanation of AI as neural networks that replicate brain functions and analysis of how AI benefits from existing digital economy centralization


Major discussion point

AI as continuity rather than disruption


Topics

Economic | Sociocultural


Disagreed with

– Valeria Betancourt

Disagreed on

AI as Disruption vs. Continuity


Users deserve voice, choice, and stake in digital life through data agency and infrastructure design changes

Explanation

People should have meaningful control over their digital existence, which requires fundamental changes to how digital infrastructure is designed. This goes beyond surface-level privacy controls to restructuring the underlying systems that govern digital interactions.


Evidence

Discussion of data as political, social, and economic power tied to identities, and mention of DSNP protocol development


Major discussion point

User empowerment through infrastructure redesign


Topics

Human rights principles | Infrastructure


Data cooperatives provide collective bargaining power and incentivize high-quality data contribution

Explanation

Cooperative models allow users to collectively negotiate with technology companies rather than being powerless as individuals. Additionally, when people have ownership stakes in data cooperatives, they’re incentivized to contribute higher quality data since it benefits their own cooperative’s financial sustainability.


Evidence

Reference to cooperative model’s hundreds of years of legacy and explanation of financial incentives for data quality in cooperative structures


Major discussion point

Collective organization for data rights


Topics

Economic | Legal and regulatory


Agreed with

– Thomas Schneider
– Abhishek Singh

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Individual data monetization yields minimal returns; collective approaches through cooperatives more viable

Explanation

Studies show that individuals would earn very little money from monetizing their personal data – perhaps a few hundred dollars per year. Worse, this could create exploitative systems where poor people spend excessive time online for minimal income. Collective approaches through cooperatives offer more meaningful economic benefits.


Evidence

Specific mention of studies showing individual data monetization would yield only a couple hundred euros or dollars per year


Major discussion point

Economic viability of different data monetization models


Topics

Economic | Human rights principles


Agreed with

– Anita Gurumurthy
– Thomas Schneider

Agreed on

Individual data monetization is insufficient; collective approaches are more viable


Disagreed with

– Abhishek Singh

Disagreed on

Individual vs. Collective Data Monetization Approaches


Open source protocols like DSNP can enable user data portability and interoperability across platforms

Explanation

Technical solutions like the Decentralized Social Networking Protocol (DSNP) can be built on existing internet infrastructure to give users control over their social identity and data. This allows people to move their data between platforms and interact across different services without being locked into single platforms.


Evidence

Technical description of DSNP protocol building on TCP/IP and enabling global, open social graph with data transportability


Major discussion point

Technical solutions for user data control


Topics

Infrastructure | Human rights principles


N

Nandini Chami

Speech speed

137 words per minute

Speech length

1016 words

Speech time

444 seconds

Private value and public value creation goals in AI innovation are not automatically aligned

Explanation

The profit motives driving private AI development don’t naturally align with public interest goals like transparency, fairness, and social inclusion. Current innovation incentives prioritize rapid deployment and scale over social benefits, requiring intentional intervention to redirect these pathways.


Evidence

Quote from UNDP Human Development Report 2025 stating that innovation incentives favor rapid deployment and automation over transparency, fairness, and social inclusion


Major discussion point

Misalignment between private and public interests in AI


Topics

Economic | Human rights principles


Path dependencies mean AI adoption doesn’t automatically enable economic diversification in developing countries

Explanation

The existing economic structures in many developing countries may not be able to absorb and benefit from AI productivity gains. Without complementary development strategies, AI adoption may not lead to the economic transformation that countries hope for.


Evidence

Reference to UNDP report findings on limited local economy capacity to absorb AI productivity spillovers and weaker links to high-value activities


Major discussion point

Structural barriers to AI-driven development


Topics

Development | Economic


Precautionary principle should replace ‘move fast and break things’ approach in AI development

Explanation

Instead of the Silicon Valley mantra of rapid deployment followed by fixing problems later, AI development should adopt the precautionary principle from environmental law. This means carefully assessing potential harms before deployment rather than dealing with consequences afterward.


Evidence

Reference to Rio Declaration’s precautionary principle and critique of ‘move fast and break things’ mentality


Major discussion point

Risk management approaches in AI development


Topics

Legal and regulatory | Human rights principles


Disagreed with

– Abhishek Singh

Disagreed on

Speed vs. Precaution in AI Development


Public participation rights needed in AI decision-making beyond just addressing harms to affected parties

Explanation

Drawing from environmental law principles like the Aarhus Convention, the public should have rights to access information and participate in AI-related decisions that affect society. This goes beyond just protecting people from AI harms to giving them a voice in AI governance.


Evidence

Reference to Aarhus Convention on Environmental Matters and its principles for public participation in decision-making


Major discussion point

Democratic participation in AI governance


Topics

Human rights principles | Legal and regulatory


T

Thomas Schneider

Speech speed

172 words per minute

Speech length

1186 words

Speech time

412 seconds

Cooperative model has hundreds of years of legacy and fits well for AI age challenges

Explanation

Switzerland’s economic success stories include many cooperatives that continue to operate successfully, such as the country’s largest supermarket chain. This model, with its democratic governance and member ownership, provides a proven framework for organizing economic activity that could be applied to AI and data governance.


Evidence

Examples of Swiss cooperatives including the biggest supermarket created 100 years ago that still operates as a cooperative with customer voting rights, and cooperative insurance companies


Major discussion point

Historical precedents for cooperative organization


Topics

Economic | Legal and regulatory


Agreed with

– Sarah Nicole
– Abhishek Singh

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing

Explanation

Current intellectual property frameworks may not be suitable for the AI age and will need to be reformed. Rather than thinking only at the individual level, societies need to organize collectively to ensure fair sharing of benefits from AI development, similar to how some countries handle healthcare or infrastructure as public goods.


Evidence

Examples of Swiss public services like waste management and hospitals that remain public rather than privatized, and discussion of health data as valuable public resource


Major discussion point

Collective approaches to intellectual property and benefit sharing


Topics

Legal and regulatory | Economic


Agreed with

– Sarah Nicole
– Anita Gurumurthy

Agreed on

Individual data monetization is insufficient; collective approaches are more viable


Switzerland developed supercomputer network and ICAIN initiative to share computing power globally for small actors

Explanation

Switzerland has created infrastructure sharing arrangements, including cooperation with Finland's Lumi supercomputer and the ICAIN network, to provide computing access to universities and small actors globally. This demonstrates how smaller countries can collaborate to access AI infrastructure.


Evidence

Mention of cooperation with NVIDIA on chip development, having one of the 10 biggest supercomputers, and the ICAIN initiative for sharing computing power


Major discussion point

International cooperation for AI infrastructure access


Topics

Infrastructure | Development


Agreed with

– Wai Sit Si Thou
– Abhishek Singh

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


Small countries need ecosystem approach similar to 19th century railway development including education and finance

Explanation

Drawing lessons from Switzerland’s 19th-century railway development, small countries need to build complete ecosystems around AI, not just acquire the technology. This includes creating educational institutions, financial systems, and skilled workforce – just as railway development required polytechnical universities and banks like Credit Suisse.


Evidence

Historical example of Swiss railway development in 1840s-50s requiring creation of polytechnical universities, financial institutions, and complete infrastructure ecosystem


Major discussion point

Holistic ecosystem development for emerging technologies


Topics

Development | Infrastructure


V

Valeria Betancourt

Speech speed

121 words per minute

Speech length

929 words

Speech time

457 seconds

Global Digital Compact underscores urgent imperative for digital cooperation to harness AI for humanity’s benefit

Explanation

The Global Digital Compact recognizes the critical need for international cooperation in AI development to ensure it serves human welfare. This cooperation is particularly important for ensuring AI benefits reach the Global South through contextually grounded innovation.


Evidence

Reference to Global Digital Compact and evidence from Global South pointing to importance of contextually grounded AI innovation


Major discussion point

International cooperation for beneficial AI development


Topics

Development | Human rights principles


Disagreed with

– Sarah Nicole

Disagreed on

AI as Disruption vs. Continuity


Local AI must be examined through three dimensions: inclusivity, indigeneity, and intentionality

Explanation

Understanding local AI requires analyzing how it can be inclusive of different communities, respectful of indigenous knowledge systems, and designed with intentional purpose for social good. These three dimensions are essential for AI that contributes to well-being of people and planet.


Evidence

Framework for the panel discussion structured around these three dimensions


Major discussion point

Comprehensive framework for evaluating local AI


Topics

Development | Sociocultural | Human rights principles


Public accountability is essential in how AI is conceptualized, designed, and deployed

Explanation

AI development cannot be left solely to private actors but requires mechanisms for public oversight and accountability throughout the entire lifecycle. This ensures AI serves public interest rather than just private profit.


Evidence

Emphasis on enabling public accountability as a must in AI development processes


Major discussion point

Democratic oversight of AI development


Topics

Legal and regulatory | Human rights principles


S

Sadhana Sanjay

Speech speed

151 words per minute

Speech length

193 words

Speech time

76 seconds

Intellectual property frameworks create challenges for natural persons retaining legal agency in AI systems

Explanation

Current IP frameworks favor corporations and non-natural legal persons in AI development, potentially undermining individual rights and agency. This raises questions about how individuals can maintain control and rights over AI systems that affect them, including in guardian-ward relationships.


Evidence

Question about how natural legal persons can retain agency given existing IP frameworks and ownership structures


Major discussion point

Individual rights versus corporate control in AI systems


Topics

Legal and regulatory | Human rights principles


Agreed with

– Anita Gurumurthy
– Thomas Schneider

Agreed on

Current intellectual property frameworks are inadequate and need reform for the AI era


A

Audience

Speech speed

172 words per minute

Speech length

299 words

Speech time

103 seconds

Blockchain-based platform needed for protecting user content and intellectual property in digital era

Explanation

A platform using QR codes and blockchain verification can help users protect their digital content by providing proof of ownership and creation. This system would work with government authorities to verify and register content, providing legal protection in case of disputes.


Evidence

Description of platform launched at IGF in Riyadh that provides QR codes and blockchain verification for content protection, working with government registration authorities


Major discussion point

Technical solutions for content protection and IP rights


Topics

Legal and regulatory | Infrastructure


WIPO has not yet reached an ideal convention for protecting AI intellectual property due to the division between AI as data platform and AI-generated content

Explanation

The World Intellectual Property Organization faces challenges in creating comprehensive AI IP protection because of fundamental disagreements about whether to focus on AI systems as data platforms or on the content they generate. This division prevents unified international standards for AI intellectual property.


Evidence

Reference to WIPO’s ongoing struggles and the specific division between treating AI as data platform versus focusing on AI-generated content


Major discussion point

International challenges in AI intellectual property regulation


Topics

Legal and regulatory | Development


Agreements

Agreement points

Cooperative models are viable and proven solutions for AI governance and data management

Speakers

– Sarah Nicole
– Thomas Schneider
– Abhishek Singh

Arguments

Data cooperatives provide collective bargaining power and incentivize high-quality data contribution


Cooperative model has hundreds of years of legacy and fits well for AI age challenges


Marketplace mechanisms could incentivize data contributors through revenue sharing models


Summary

Multiple speakers endorsed cooperative models as effective organizational structures for AI and data governance, drawing on historical precedents and emphasizing collective approaches over individual solutions


Topics

Economic | Legal and regulatory


Shared infrastructure and resource pooling are essential for democratizing AI access

Speakers

– Wai Sit Si Thou
– Abhishek Singh
– Thomas Schneider

Arguments

CERN model could provide shared AI infrastructure through pooled resources from multiple stakeholders


India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


Switzerland developed supercomputer network and ICAIN initiative to share computing power globally for small actors


Summary

All speakers agreed that expensive AI infrastructure requires collaborative approaches and resource sharing to ensure equitable access, with concrete examples from different countries and international models


Topics

Infrastructure | Development


Community-led and contextual approaches are necessary for meaningful AI development

Speakers

– Wai Sit Si Thou
– Abhishek Singh
– Anita Gurumurthy

Arguments

AI solutions must work with community-led data and indigenous knowledge for local contexts


Crowd-sourcing campaigns for linguistic datasets across languages and cultures can democratize data access


Need to retain multilingual society structures and decolonize scientific advancement in AI


Summary

Speakers consistently emphasized the importance of involving local communities in AI development and ensuring AI systems reflect diverse cultural and linguistic contexts rather than imposing homogeneous solutions


Topics

Sociocultural | Development


Current intellectual property frameworks are inadequate and need reform for the AI era

Speakers

– Anita Gurumurthy
– Thomas Schneider
– Sadhana Sanjay

Arguments

Trade secrets shouldn’t lock up data needed by public institutions like hospitals and transportation authorities


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing


Intellectual property frameworks create challenges for natural persons retaining legal agency in AI systems


Summary

Multiple speakers identified fundamental problems with existing IP frameworks in the context of AI, calling for reforms that better balance private rights with public interest and individual agency


Topics

Legal and regulatory | Human rights principles


Individual data monetization is insufficient; collective approaches are more viable

Speakers

– Sarah Nicole
– Anita Gurumurthy
– Thomas Schneider

Arguments

Individual data monetization yields minimal returns; collective approaches through cooperatives more viable


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing


Summary

Speakers agreed that individual-level solutions for data rights and monetization are inadequate, emphasizing the need for collective organization and protection of digital commons


Topics

Economic | Legal and regulatory


Similar viewpoints

Both speakers from IT for Change emphasized how current AI development serves private interests at the expense of cultural diversity and public good, requiring intentional intervention to redirect AI toward more equitable outcomes

Speakers

– Anita Gurumurthy
– Nandini Chami

Arguments

Western cultural homogenization through AI platforms amplifies epistemic injustices and erases cultural histories


Private value and public value creation goals in AI innovation are not automatically aligned


Topics

Sociocultural | Human rights principles | Economic


Both speakers emphasized the fundamental importance of capacity building and skills development as essential components of inclusive AI development, alongside infrastructure and data access

Speakers

– Wai Sit Si Thou
– Abhishek Singh

Arguments

Three key drivers for inclusive AI: infrastructure, data, and skills with focus on equity


Capacity building initiatives needed for training on model development and GPU management skills


Topics

Development | Infrastructure


Both speakers challenged mainstream narratives about AI being inherently disruptive, instead arguing for more cautious, deliberate approaches that recognize AI’s role in reinforcing existing power structures

Speakers

– Sarah Nicole
– Nandini Chami

Arguments

AI is automation tool that amplifies existing centralized structures rather than disrupting them


Precautionary principle should replace ‘move fast and break things’ approach in AI development


Topics

Economic | Legal and regulatory


Unexpected consensus

Government intervention and public investment in AI infrastructure

Speakers

– Abhishek Singh
– Wai Sit Si Thou
– Thomas Schneider

Arguments

India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


CERN model could provide shared AI infrastructure through pooled resources from multiple stakeholders


Switzerland developed a supercomputer network and the ICAIN initiative to share computing power globally with small actors


Explanation

Despite representing different political and economic contexts, speakers from India, a UN agency, and Switzerland all endorsed significant government intervention and public investment in AI infrastructure, challenging typical market-driven approaches to technology development


Topics

Infrastructure | Economic | Development


Rejection of Silicon Valley ‘move fast and break things’ mentality

Speakers

– Nandini Chami
– Sarah Nicole
– Valeria Betancourt

Arguments

Precautionary principle should replace ‘move fast and break things’ approach in AI development


AI is automation tool that amplifies existing centralized structures rather than disrupting them


Public accountability is essential in how AI is conceptualized, designed, and deployed


Explanation

There was unexpected consensus across speakers from different backgrounds in rejecting the dominant Silicon Valley approach to technology development, instead advocating for more cautious, accountable approaches typically associated with environmental and public health regulation


Topics

Legal and regulatory | Human rights principles


Overall assessment

Summary

The speakers demonstrated remarkable consensus on the need for alternative approaches to AI development that prioritize collective organization, public accountability, and cultural diversity over market-driven solutions. Key areas of agreement included the viability of cooperative models, the necessity of shared infrastructure, the importance of community-led development, and the inadequacy of current intellectual property frameworks.


Consensus level

High level of consensus with significant implications for AI governance. The agreement across speakers from different sectors (government, UN agencies, civil society, academia) and countries suggests growing recognition that current AI development paradigms are insufficient for achieving equitable outcomes. This consensus provides a foundation for alternative policy approaches that emphasize public interest, collective action, and democratic participation in AI governance, challenging dominant narratives about inevitable technological disruption and market-led solutions.


Differences

Different viewpoints

Individual vs. Collective Data Monetization Approaches

Speakers

– Abhishek Singh
– Sarah Nicole

Arguments

Marketplace mechanisms could incentivize data contributors through revenue sharing models


Individual data monetization yields minimal returns; collective approaches through cooperatives more viable


Summary

Singh advocates for marketplace mechanisms where individuals can be paid for data contributions, citing examples like the company Karya. Nicole argues that individual monetization yields minimal returns and could exploit the poor, advocating instead for collective cooperative approaches.


Topics

Economic | Legal and regulatory


AI as Disruption vs. Continuity

Speakers

– Sarah Nicole
– Valeria Betancourt

Arguments

AI is automation tool that amplifies existing centralized structures rather than disrupting them


Global Digital Compact underscores urgent imperative for digital cooperation to harness AI for humanity’s benefit


Summary

Nicole presents AI as fundamentally non-disruptive, arguing it reinforces existing power structures. Betancourt frames AI as requiring urgent cooperative action for humanity’s benefit, implying transformative potential that needs guidance.


Topics

Economic | Sociocultural | Development


Speed vs. Precaution in AI Development

Speakers

– Nandini Chami
– Abhishek Singh

Arguments

Precautionary principle should replace ‘move fast and break things’ approach in AI development


India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


Summary

Chami advocates for precautionary approaches and careful assessment before AI deployment. Singh focuses on rapid infrastructure development and deployment to democratize access, representing a more accelerated approach.


Topics

Legal and regulatory | Human rights principles | Infrastructure


Unexpected differences

Fundamental Nature of AI Technology

Speakers

– Sarah Nicole
– Other speakers

Arguments

AI is automation tool that amplifies existing centralized structures rather than disrupting them


Explanation

Nicole’s characterization of AI as fundamentally non-disruptive contrasts sharply with the general framing by other speakers who treat AI as a transformative technology requiring new approaches. This philosophical disagreement about AI’s nature is unexpected in a discussion focused on local AI solutions.


Topics

Economic | Sociocultural


Intellectual Property Protection vs. Commons Access

Speakers

– Audience (Dr. Nermin Salim)
– Anita Gurumurthy

Arguments

Blockchain-based platform needed for protecting user content and intellectual property in digital era


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation


Explanation

The audience member advocates for stronger IP protection mechanisms while Gurumurthy argues for protecting commons from IP exploitation. This represents an unexpected fundamental disagreement about whether the solution is more or less IP protection.


Topics

Legal and regulatory | Infrastructure


Overall assessment

Summary

The discussion shows moderate disagreement on implementation approaches rather than fundamental goals. Main areas of disagreement include individual vs. collective data monetization, AI’s disruptive nature, development speed vs. precaution, and IP protection vs. commons access.


Disagreement level

Medium-level disagreement with significant implications. While speakers generally agree on the need for inclusive, locally-relevant AI, their different approaches to achieving this goal could lead to incompatible policy recommendations. The disagreements reflect deeper philosophical differences about technology’s role, market mechanisms, and the balance between innovation speed and social protection.




Takeaways

Key takeaways

Local AI development requires addressing three critical dimensions: inclusivity, indigeneity, and intentionality to ensure AI serves the common good rather than perpetuating existing inequalities


AI infrastructure inequality is severe, with massive investment disparities (AI investment 3x climate adaptation spending) and monopolistic control (NVIDIA controls 90% of GPUs)


Current AI models amplify Western cultural homogenization and epistemic injustices, erasing cultural histories and multilingual thinking structures


Cooperative models and shared infrastructure approaches can democratize AI access, as demonstrated by India’s subsidized compute infrastructure and Switzerland’s supercomputer sharing initiatives


Data governance must shift from individual to collective approaches, with data cooperatives providing better bargaining power and quality incentives than individual data monetization


AI is fundamentally an automation tool that amplifies existing centralized structures rather than disrupting them, requiring radical infrastructure changes for true user agency


The tension between necessary pluralism for local contexts and generalized models dominating the market represents a key challenge for inclusive AI development


Intellectual property frameworks need fundamental reform to prevent trade secrets from locking up data needed by public institutions and to protect commons from free-riding by commercial AI models


Resolutions and action items

Establish a CERN-like model for AI infrastructure sharing globally, pooling resources from multiple countries and organizations


Create global repository of AI applications in key sectors (healthcare, agriculture, education) that can be shared across geographies


Develop crowd-sourcing campaigns for linguistic datasets to support AI development in minoritized languages


Implement public procurement policies that steer AI development toward human-centric and worker-complementary solutions


Establish multi-stakeholder working group on data governance to develop framework recommendations


Create capacity building initiatives through UN bodies or global AI partnerships for training on model development and AI skills


Develop marketplace mechanisms for incentivizing data contributors through revenue sharing models


Reform intellectual property laws to include exceptions for public interest use of aggregated data


Unresolved issues

How to make small autonomous AI systems economically viable against dominant large language models with massive scaling advantages


Finding scalable alternatives to data-scraping advertising business models that currently dominate the digital economy


Developing concrete metrics to define and measure safety, responsibility, and privacy in AI systems beyond ‘do no harm’ principles


Resolving the fundamental tension between open source AI development and preventing free-riding by commercial entities


Addressing the ‘economy of prompt’ where user interactions continue to improve monopolistic AI models


Determining how to fix liability for AI harms across complex transnational value chains with multiple actors


Establishing effective mechanisms for public participation in AI decision-making processes


Creating sustainable funding models for local AI development that don’t rely on exploitative data practices


Suggested compromises

Hybrid approach combining open source development with protections against commercial exploitation through reformed IP frameworks


Government subsidization of compute infrastructure costs (as demonstrated by India’s 40% cost underwriting) to balance private sector efficiency with public access


Society-level collective bargaining for data rights rather than purely individual or purely corporate control models


Balancing innovation incentives with precautionary principles by slowing ‘move fast and break things’ approach while preserving development momentum


Multi-stakeholder governance models that include private sector, government, and civil society in AI development decisions


Regional cooperation approaches (like East African countries pooling Swahili language data) to achieve necessary scale while maintaining local relevance


Public-private partnerships for AI infrastructure that leverage private sector capabilities while ensuring public benefit and access


Thought provoking comments

Between 2022 and 2025, AI-related investment doubled from $100 to $200 billion. By comparison, this is about three times the global spending on climate change adaptation… So the efficiencies in compute are really not necessarily going to translate into some kind of respite for the kind of climate change impacts.

Speaker

Anita Gurumurthy


Reason

This comment is deeply insightful because it reframes the AI discussion by introducing a critical tension between AI investment and climate priorities. It challenges the assumption that technological efficiency automatically leads to environmental benefits, revealing the paradox that AI efficiency gains are being used to build larger, more resource-intensive models rather than reducing overall environmental impact.


Impact

This comment established the foundational tension for the entire discussion, setting up the core dilemma that all subsequent speakers had to grapple with: how to democratize AI benefits while addressing planetary boundaries. It shifted the conversation from purely technical considerations to systemic sustainability concerns.


AI is essentially a neural network… So overall, AI is an automation tool. It is a tool that accelerates and amplifies everything that we know… So, if AI is a continuity and an amplification of what we already know, then the radicality needs to come from the response that we’ll bring to it.

Speaker

Sarah Nicole


Reason

This comment is profoundly thought-provoking because it directly challenges the mainstream narrative of AI as revolutionary disruption. By reframing AI as an amplification tool that reinforces existing power structures, it shifts the focus from the technology itself to the systemic responses needed to address its impacts.


Impact

This reframing fundamentally altered the discussion’s direction, moving away from technical solutions toward structural and infrastructural changes. It provided intellectual grounding for why radical responses are necessary and influenced subsequent speakers to focus more on systemic alternatives like cooperatives and commons-based approaches.


We reject the unified global system. But the question is, are these smaller autonomous systems even possible?… So this tension between pluralism that is so necessary and generalized models that seem to be the only way AI models are developing in the market, this tension is where the sweet spot of investigation actually lies.

Speaker

Anita Gurumurthy


Reason

This comment identifies the central paradox of local AI development – the need for cultural and linguistic diversity versus the economic and technical pressures toward centralized, generalized models. It articulates the core tension that makes this problem so complex and resistant to simple solutions.


Impact

This comment established the intellectual framework that guided much of the subsequent discussion. It helped other speakers understand why technical solutions alone (like shared computing infrastructure) need to be coupled with new governance models and cooperative approaches.


The largest source for the large language models, especially ChatGPT, was Wikipedia. So you actually see free riding happening on top of these commons… But what if my open source meant for my community is actually servicing profiteering?

Speaker

Anita Gurumurthy


Reason

This observation is particularly insightful because it reveals how current AI development exploits commons-based resources while privatizing the benefits. It challenges the assumption that open-source solutions automatically serve community interests and highlights the need for protective mechanisms.


Impact

This comment deepened the discussion about intellectual property and data governance, leading to more nuanced conversations about how to structure commons-based approaches that can’t be easily exploited by commercial interests. It influenced the later discussion about cooperative models and collective bargaining.


The question of having a stake in your data has often been framed on a personal level… the answer will not be on an individual perspective, but it would be on a collective one. Because it’s when the data is aggregated, it’s when the data is in a specific context that then it gains value.

Speaker

Sarah Nicole


Reason

This comment is insightful because it challenges the dominant framing of data rights as individual privacy issues and redirects attention to collective action and cooperative models. It provides a practical pathway forward that moves beyond the limitations of individual data monetization.


Impact

This comment shifted the discussion from individual rights to collective organizing, influencing other speakers to elaborate on cooperative models and community-based approaches. It helped bridge the gap between theoretical critiques and practical alternatives.


We launched a crowd-sourcing campaign to get linguistic data across languages, across cultures, in which people could kind of come to a portal and contribute data sets… If we can take up capacity-building initiatives and training… it can really, really help.

Speaker

Abhishek Singh


Reason

This comment is valuable because it provides concrete, implementable examples of how local AI can work in practice, moving beyond theoretical discussions to actual policy implementations. It demonstrates that alternative approaches are not just idealistic but practically feasible.


Impact

This grounded the discussion in real-world examples and gave other participants concrete models to reference. It helped shift the conversation from problem identification to solution implementation, influencing the final recommendations about cooperative infrastructure and capacity building.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a progression from problem identification to systemic analysis to practical alternatives. Anita Gurumurthy’s opening comments about the climate-AI investment paradox and the tension between pluralism and generalization set up the core dilemmas. Sarah Nicole’s reframing of AI as amplification rather than disruption provided the theoretical foundation for why radical responses are necessary. The subsequent comments built on this foundation, moving from critique to concrete alternatives like cooperative models, shared infrastructure, and community-based data governance. Together, these comments transformed what could have been a technical discussion about AI optimization into a deeper conversation about power structures, commons governance, and alternative economic models. The discussion evolved from identifying problems with current AI development to articulating a coherent vision for community-controlled, environmentally sustainable AI systems.


Follow-up questions

Are smaller autonomous AI systems even possible, and how can fragmented community efforts be brought together to collaborate?

Speaker

Anita Gurumurthy


Explanation

This addresses the fundamental tension between necessary pluralism and the market trend toward generalized models, which is crucial for enabling local AI development


How do we build our own computational grammar and reject unified global systems while maintaining viability?

Speaker

Anita Gurumurthy


Explanation

This is essential for decolonizing scientific advancement and preserving multilingual societies’ diverse ways of thinking


How can we create a global compute infrastructure facility (CERN model for AI) across countries with multilateral bodies joining to make infrastructure available affordably?

Speaker

Abhishek Singh


Explanation

This could democratize access to expensive AI compute infrastructure that is currently controlled by few companies


How can we establish a global repository of AI applications and use cases that can be shared across geographies?

Speaker

Abhishek Singh


Explanation

This would enable knowledge sharing and prevent duplication of efforts in developing AI solutions for common problems


How do we find a scalable alternative business model to the current data scraping and advertising model?

Speaker

Sarah Nicole


Explanation

Current business models undermine user agency and data ownership, so alternatives are needed for a fair data economy


How do we develop qualitative and quantitative metrics to define safety, responsibility, and privacy in AI systems?

Speaker

Sarah Nicole


Explanation

Clear metrics are needed to move beyond vague principles and create accountability mechanisms


How do we fix liability for individual, collective, and societal harms in complex transnational AI value chains?

Speaker

Nandini Chami


Explanation

Current liability regimes are inadequate for the complexity of AI systems and the difficulty of proving causal links to harms


How do we update product fault liability regimes so the burden of proof is not on affected parties to prove causal links between AI defects and harms?

Speaker

Nandini Chami


Explanation

Given the black box nature of AI technology, current liability frameworks place unfair burden on those harmed by AI systems


How can we work out marketplace mechanisms where data contribution is priced and contributors are incentivized?

Speaker

Abhishek Singh


Explanation

This addresses the fundamental question of how to fairly compensate those whose data contributes to AI development


How do we institute exceptions in IP laws for public interest use of aggregate data by public authorities?

Speaker

Anita Gurumurthy


Explanation

Trade secrets are being used to lock up data that should be available to public transportation, hospitals, and other essential services


How do we protect open source and data commons from free riding by profit-making entities?

Speaker

Anita Gurumurthy


Explanation

Current systems allow companies to profit from commons like Wikipedia without fair compensation to the community


How do we curtail the ‘economy of prompt’ where users perfect monopolistic models through their interactions?

Speaker

Anita Gurumurthy


Explanation

User prompts are continuously improving large language models, further entrenching monopolistic advantages


How can we develop good data governance frameworks through multi-stakeholder approaches?

Speaker

Wai Sit Si Thou


Explanation

Data governance is strategic for both AI and digital economy development, requiring collaborative frameworks


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #137 Ethical Hacking for a Safer Internet

Lightning Talk #137 Ethical Hacking for a Safer Internet

Session at a glance

Summary

This discussion focused on the legal challenges surrounding ethical hacking and the need for improved legal frameworks to support cybersecurity efforts. Tim Philipp Schafers from Mint Secure and lawyer Carolin Kothe presented their analysis of how different jurisdictions treat ethical hacking versus malicious hacking activities. They began by defining ethical hacking as systematic testing to uncover security vulnerabilities, distinguishing between authorized penetration testing and unauthorized but well-intentioned security research conducted for societal benefit.


The speakers emphasized the critical importance of external hackers in strengthening cybersecurity, noting that the NIS2 directive recognizes that most security disclosures come from external testers. They highlighted how crowdsourced defense works effectively, as demonstrated by open source software development and corporate bug bounty programs. However, they identified a significant problem: most legal systems fail to differentiate between ethical and malicious hacking, creating uncertainty and potential legal risks for security researchers.


The presentation examined various jurisdictional approaches across Europe, noting that Poland stands out as a rare example with explicit statutory support for ethical hacking when done solely to secure systems. Most other countries equate ethical hacking with criminal activity, though some like the US and France have prosecutorial discretion policies that provide safe harbor for responsible disclosure. The speakers outlined four key elements needed for an ideal legal framework: legal certainty, explicit immunity for ethical hackers, reframing of hacking terminology, and clear differentiation between ethical and malicious activities.


They concluded by calling for harmonized international regulations and greater public awareness to support collaboration between ethical hackers, private companies, and governments in strengthening cybersecurity defenses.


Keypoints

## Major Discussion Points:


– **Definition and Types of Ethical Hacking**: The speakers distinguish between malicious hacking and ethical hacking, explaining that ethical hacking involves systematic testing to uncover security vulnerabilities with good intent. They identify two subtypes: authorized ethical hacking (contracted penetration testing, bug bounty programs) and unauthorized ethical hacking done for societal benefit without financial gain.


– **Legal Inconsistencies Across Jurisdictions**: The presentation highlights how different countries treat ethical hacking legally, with most jurisdictions failing to distinguish between ethical and malicious hacking. Poland is cited as a rare positive example with explicit statutory support, while countries like Germany, the US, and France rely on prosecutorial discretion rather than clear legal protections.


– **Current Legal Challenges for Ethical Hackers**: Despite following responsible disclosure practices, ethical hackers face legal uncertainty, potential prosecution, and emotional pressure. Even when not prosecuted, they may face investigations, reputational damage, and restrictions on sharing their findings for educational purposes.


– **Proposed Legal Framework Improvements**: The speakers outline four key elements for better regulation: legal certainty, explicit immunity for responsible disclosure, reframing of hacking in public perception, and clear differentiation between ethical and malicious activities. They also advocate for harmonized international regulations.


– **Need for Collaboration and Public Awareness**: The discussion emphasizes the importance of ethical hackers in cybersecurity, citing examples like the Heartbleed bug discovery and DEF CON voting village, while calling for better collaboration between private sector, ethical hacking community, and government.


## Overall Purpose:


The discussion aims to advocate for legal reform that would protect and encourage ethical hacking by establishing clear legal frameworks that distinguish between beneficial security research and malicious cybercrime. The speakers seek to educate the audience about the value of ethical hacking and promote policy changes that would provide legal certainty for security researchers.


## Overall Tone:


The tone is professional, educational, and advocacy-oriented throughout. The speakers maintain an informative approach while expressing clear frustration with current legal ambiguities. The tone remains consistently constructive, focusing on solutions rather than criticism, and becomes more engaging during the Q&A session where practical concerns about surveillance and brain drain are addressed with empathy and understanding.


Speakers

– **Tim Philipp Schafers**: Co-founder of Mint Secure, specializes in ethical hacking and criminal law in regard to computer crime


– **Carolin Kothe**: Trained lawyer, does software development in her law firm, deals with questions of standardization and citizen knowledge as part of her role at the Liquid Legal Institute


– **Audience**: Multiple audience members asking questions during the Q&A session (roles and expertise not specified)


Additional speakers:


None – all speakers were included in the provided speakers names list.


Full session report

# Legal Challenges and Reform Needs for Ethical Hacking: A Comprehensive Discussion Summary


## Introduction and Context


This discussion brought together Tim Philipp Schafers, co-founder of Mint Secure specializing in ethical hacking, and Carolin Kothe from the Liquid Legal Institute, who combines legal expertise with software development experience in standardization and citizen knowledge. Their presentation addressed the critical legal challenges facing ethical hackers and the need for comprehensive legal reform to support cybersecurity efforts while protecting legitimate security researchers.


The speakers presented their analysis through a structured four-step approach: defining ethical hacking and its variants, explaining why ethical hacking is important, examining current legal frameworks across jurisdictions, and proposing solutions for legal reform.


## Defining Ethical Hacking and Its Variants


Carolin Kothe explained that hacking fundamentally involves systematic testing to uncover security vulnerabilities, with the crucial distinction between ethical and malicious hacking lying in three critical factors: intent, authorization, and methods employed. The actual judgment of whether hacking is ethical or malicious depends on these factors rather than the technical actions themselves.


Kothe distinguished between two distinct subtypes of ethical hacking: authorized ethical hacking, which includes contracted penetration testing and corporate bug bounty programs, and unauthorized but benevolent ethical hacking, conducted without individual contracts but motivated by societal benefit rather than financial gain.


Tim Philipp Schafers referenced the established hacker ethic from the 1980s, later extended by groups like the Chaos Computer Club, which established moral principles including breaking systems to enhance security, avoiding littering with other people’s data, and protecting private information. He provided concrete examples including the discovery of the Heartbleed bug in OpenSSL affecting HTTPS connections, testing conducted at DEF CON voting villages, and responsible information handling. Schafers also mentioned historical examples like the L0pht hacker collective’s testimony and Taiwanese activist groups who handled sensitive information responsibly.


## The Critical Importance of Ethical Hacking in Cybersecurity


Both speakers emphasized the indispensable role of ethical hackers in modern cybersecurity. Kothe highlighted that external security researchers provide the majority of security disclosure reports to Computer Emergency Response Teams (CERTs), as recognized by regulations like the NIS2 directive. This external perspective proves essential because internal security teams may miss vulnerabilities due to familiarity with their own systems.


Kothe noted that “crowdsourced defense works,” referencing the open source software model where distributed scrutiny by many contributors strengthens overall security. Corporate recognition of ethical hacking’s value has grown, with companies increasingly investing in bug bounty programs, though Schafers cautioned that hackers can be “uncautious with their wording” when asking for rewards, potentially creating legal complications.


The speakers emphasized that ethical hacking serves as crucial defense against increasing cybercrime costs, both monetary and in terms of privacy breaches and infrastructure disruption.


## Legal Framework Disparities Across Jurisdictions


The presentation revealed significant inconsistencies in how different countries approach ethical hacking within their legal systems. Kothe’s analysis demonstrated that most jurisdictions fail to distinguish between ethical and malicious hacking, creating uncertainty for security researchers.


Poland emerged as a rare positive example, with explicit statutory support stating that no offense is committed when hacking is conducted “solely for the purpose of securing a system.” Kothe termed this a “unicorn regulation” that represents what comprehensive legal protection could look like, yet remains exceptional.


The complexity varies considerably across jurisdictions. Some countries require bypassing security measures as an objective element of computer crime, while others treat authorization as either an objective element or a justification defense. Countries like Latvia incorporate substantial harm requirements, while Austria, and the proposed German reform, include intent to harm or enrich as a subjective element, which better distinguishes ethical from malicious hacking but still creates uncertainty.


## Current Legal Challenges and Prosecution Approaches


Despite following responsible disclosure practices, ethical hackers face considerable legal uncertainty. Schafers emphasized the emotional pressure security researchers experience when discovering vulnerabilities, lacking clear statutory protection even when acting with beneficial intent.


The speakers identified four approaches jurisdictions currently employ: explicit statutory support (Poland), additional legal requirements favoring ethical hackers, prosecutorial discretion policies creating safe harbors, and reliance on justification defenses.


Countries like the United States and France have implemented prosecutorial discretion policies. Kothe referenced the justice.gov website and French authority safe harbor details, but noted these approaches remain inadequate because security researchers still technically commit crimes and face restrictions on publishing findings for educational purposes.


Even without prosecution, the investigation process creates significant hardship through mental burden, potential reputation damage, and restrictions on sharing research findings that could benefit the broader security community.


## Proposed Solutions for Comprehensive Legal Reform


The speakers outlined their “wish list” of four essential elements for an ideal legal framework. First, legal certainty must be established so security researchers understand how to responsibly report vulnerabilities without fear of prosecution.


Second, explicit immunity should be codified in law rather than relying on prosecutorial discretion. Third, comprehensive reframing of hacking terminology and public perception is necessary to move away from purely negative connotations. Fourth, clear legal differentiation between ethical and malicious actors must be established in statutory frameworks.


The speakers advocated for harmonized international regulation, recognizing that software vulnerabilities affect multiple jurisdictions and fragmented national approaches create unnecessary complexity for companies acting internationally.


## Audience Engagement and Unresolved Implementation Issues


The question-and-answer session revealed additional complexities. One audience member asked about Germany’s progress after a failed referendum, prompting Kothe to explain details about the draft’s “not unauthorized” provision and burden of proof considerations in German legal reform attempts.


An important concern was raised about whether intent requirements might expose security researchers to intrusive surveillance practices. Another audience member, Janik, questioned potential brain drain effects, suggesting that legal uncertainty might push talented individuals toward black hat activities rather than legitimate white hat security research. Schafers responded by noting that anonymous reporting through onion networks represents one way people navigate these legal uncertainties.


The question of how far ethical hackers can proceed in their testing activities remains unresolved, as hacking involves a series of actions rather than a single act, raising complex questions about which specific actions are covered by legal justifications.


## Areas of Consensus and Approach Differences


Both speakers agreed that ethical hacking provides essential security benefits and should be clearly distinguished from malicious activities. They shared the view that current legal frameworks create harmful uncertainty for security researchers and that comprehensive legal reform including explicit statutory protection is necessary.


Both advocated for harmonized international regulation and recognized that societal perception of hacking needs fundamental change. They agreed that prosecutorial discretion approaches are inadequate solutions.


Differences emerged primarily in emphasis, with Kothe providing detailed technical legal analysis while Schafers focused more on practical implementation needs and public awareness requirements.


## Conclusions and Call to Action


The speakers established that current legal approaches fail to serve either security or justice interests effectively, creating uncertainty for beneficial actors while potentially driving talent toward malicious activities. They called for comprehensive rather than piecemeal reform, addressing statutory protections, public perception, international coordination, and practical implementation challenges.


The speakers concluded with specific action items: collecting and discussing points about better legal frameworks within companies and with lawmakers, sharing ideas about differentiating between malicious and ethical activities, working toward harmonized international regulation, and increasing public awareness through education and discussion.


The discussion highlighted that achieving comprehensive reform will require sustained effort and careful attention to unintended consequences, while recognizing the essential role ethical hackers play in protecting digital infrastructure and systems.


Session transcript

Tim Philipp Schafers: Hello and welcome to our talk Ethical Hacking for a Safer Internet. My name is Tim Philipp Schafers, and today we will talk about criminal law in regard to computer crime. I’m the co-founder of Mint Secure. We are also doing ethical hacking, and I’m happy to be here today with Carolin Kothe.


Carolin Kothe: My name is Carolin Kothe, I’m a trained lawyer. I also do software development in my law firm, and I deal with questions of standardization and citizen knowledge as part of my role at the Liquid Legal Institute. So today we will examine the legal patchwork concerning the treatment of ethical hacking in different jurisdictions, and we want to show you what a harmonized framework that empowers ethical hackers to strengthen our IT landscape could look like. We will proceed in four steps: first, defining what hacking and ethical hacking actually mean, to start with a common ontology for our talk. Then we will continue by emphasizing the importance of external hackers as indispensable, and after that we will show you the main differences between jurisdictions in Europe. Last but not least, we will envision what an ideal legal framework could look like, as the start of a little discussion. So what is ethical hacking? Hacking has a negative connotation, a negative narrative to it, but what it actually means is just the systematic testing to uncover security vulnerabilities in systems, applications, and networks. To judge the actual act, we have to look at the intent, the authorization, and the methods that the hacker actually used. What people usually have in mind when they think of hacking is the malicious act, meaning somebody seeks private gain, sabotage, or theft. But there is also ethical hacking, and we can even distinguish ethical hacking into two subtypes. The first is authorized, meaning companies actually hire penetration testing teams or run bug bounty programs to invite external testers to test their defenses. And then we have the other, even more highly debated group, which doesn’t have these individual contracts but works without seeking financial benefit, doing it out of societal interest. And because of that, we will show you the disclosure policies that all these hackers follow, no matter which kind of ethical hacking group they belong to. But first we want to emphasize why we are actually giving this talk. There is a surge in cybercrime, and with it comes a steep increase in costs, and we don’t only mean the monetary costs but also the intangible risks. That is why regulators have already recognized the need to put pressure on companies to invest in their security systems; we have seen this especially in the NIS2 directive, which even states that the majority of disclosure reports come from external testers. And the market reinforces this: there are already plenty of companies that invest heavily in bug bounty programs, where they pay those who report responsibly, and we also see it in the increase of open source usage. Because open source relies on so many eyes, it draws on the expertise of different people who know different kinds of security vulnerabilities to build up higher security barriers. So crowdsourced defense works, and open source is living proof of that. This kind of discussion has been going on for quite a while already, and to give an example of that, I can hand over to Tim.


Tim Philipp Schafers: Yeah, thank you very much, Carolin Kothe. Here you can see a testimony from the L0pht hacker collective. It was kind of the first time that hackers were in direct exchange with politicians, and as you can see, this was still a while ago. At that time, these were the first remarks mentioning that certain critical infrastructures exist and that real harm can occur there. But actually, not that much has changed in how the media perceives hackers. In general, as Carolin Kothe mentioned, the term very often carries a negative framing. We want to flip that and emphasize that hacking is also a possibility to enhance security. Very often one hears that hacking is malicious, but if we look back at the so-called hacker ethic, we see that even within this community there is a strong understanding of how to act morally. Here you can see an excerpt from the hacker ethic, which basically describes how you should work as a real hacker. There you can see, for example, the idea of breaking things to enhance them and make them even more secure, which is a very basic principle that is already there. Furthermore, you should not litter with other people’s data, and you should use public data and protect private data. So this is really a common ground and understanding. It was first proposed and discussed in the 1980s and later extended, for example, by the Chaos Computer Club, where many people thought about how to handle hacking and what really good hacking is in that regard. To my personal understanding, it is really important to understand that breaking things always somehow helps with fixing things. We also have a few examples here, which may or may not be familiar to you. I just want to briefly mention a few of them. There was the so-called Heartbleed bug, a security vulnerability within OpenSSL, which is used for transport layer security. In 2014, there was a serious vulnerability in that software, which is used by a lot of web servers on the Internet. When you enter a website over HTTPS, this software is probably used on the server side to encrypt the connection. The good thing is that people very often find these bugs and report them so that they can be fixed. This is mostly how open source software, for example, is secured. There is also the principle that you don’t disclose any information about a security vulnerability before it is fixed, which is closely related to the hacker ethic you saw before. A second example is the so-called DEF CON voting village. DEF CON is a security conference in the US, and the basic idea is that, for example, voting machines are rigorously tested by hackers to see whether they are secure or not. Of course, this also helps to enhance security and to make sure those components are secured. As Caro mentioned before, the NIS2 directive also points in the direction of saying it makes sense to break certain things and fix them afterwards. This is the basic enhancement process, I would say. The third example is from a Taiwanese activist group. To me, this is also very important, because a lot of people always think of hacking from the technical standpoint.
But for a lot of hackers, and also for me personally, hacking is also about handling information responsibly. In this case, for example, people were able to make use of public information and APIs and built a more user-friendly way to disclose information. This is very often also something that hackers do. So these are just a few examples of what can be done with hacking, and this is just a short excerpt. There are many more examples where the security of software and products was enhanced in the past by certain people, hacker collectives, and so on. And now I will hand over to Carolin so that we can look at certain legal examples.


Carolin Kothe: So after Tim told you about the disclosure policies, you might think that if you follow those policies, you are not treated as a criminal. Yet statutory certainty is quite rare for ethical hackers. Most countries still treat ethical hackers as criminals. We had a referendum in Germany, which actually failed. Due to that, and due to the fact that companies usually act internationally, meaning their software is used internationally and different jurisdictions are always affected, we had a look into other countries. And we did find one good, rare example in the Polish Penal Code, which explicitly supports ethical hacking in the sense that it says no offense is committed if you act solely for the purpose of securing a system. However, this is kind of a unicorn regulation, because other states don’t make this differentiation. They equate ethical hacking with malicious hacking in the first place. So I can hand over to Tim to explain what it actually means in practice if you equate malicious hacking with ethical hacking.


Tim Philipp Schafers: Yeah, in general, one can imagine that it comes with a lot of emotional pressure when you find a certain vulnerability but are unsure whether this is fully covered by the law and how to report it. What we see is that ethical hackers are often threatened by the classical legal system, by how the laws work. From my perspective, the core question is whether we want ethical hackers to be put under pressure, not knowing how to report certain vulnerabilities, or whether it doesn’t make more sense to say: please hack public systems to secure them, and responsibly report what you find. There are some computer emergency response teams around the world that receive and handle such reports, and in a few cases this helps to make systems even more secure. In other cases, certain hackers came under legal pressure and were not able to disclose or talk much about these topics.


Carolin Kothe: So to understand the main differences between the jurisdictions and how they treat ethical hacking, we need to clarify, at least briefly, what actually makes an act a crime, what will be punished, and what will be prosecuted. A crime usually has two conditions to it. The first is: did you fulfill all the elements of the offense as stated by the law? The second is: is this act deemed lawful or unlawful? It is unlawful if you lack any kind of legal justification for it, as we mentioned with the authorization at the start. So let’s look at the main differences between the jurisdictions, starting from the act itself. In every jurisdiction we have some variant of accessing or altering a system, interfering with a system, or interfering with data. But some countries, though not all of them, additionally require the bypassing of security measures in their statutes. We also have the element of authorization, sometimes as an objective element of the act and sometimes as a justification. As stated, that makes a huge difference, because one variant means that even commissioned ethical hackers committed a crime but are justified, and the other is the I-didn’t-commit-a-crime-at-all variation. There is another issue with authorization, especially when it comes to third-party systems, because there is a dispute about whose authorization you actually need to be completely covered. It could be that I am commissioned by one company, but if I accidentally or intentionally access a third-party system, I might need another system owner’s authorization too. So even commissioned hackers are always in that kind of gray area, which is obviously not what is wanted. There are also countries that have added requirements that put up a higher threshold, which is to the benefit of ethical hackers. One example would be Latvia, which requires additional substantial harm. This substantial harm is a vague, ambiguous term, because what does substantial actually mean? But it does help ethical hackers, because especially if you read it as financial harm, this is usually not caused by ethical hackers, and by that you have a kind of distinction. When we look at the subjective elements of an offense, we see that some countries set an even better threshold that distinguishes more clearly between ethical hacking and malicious attacks. The subjective element usually says you intentionally and knowingly do what is stated in the objective offense; but if you also add to the law the intent to harm someone or to enrich yourself or a third party, which is quite easily done, which was also done in the German referendum and which Austria, for example, does, this intent is what differentiates the ethical hacker from the malicious attacker. By that you achieve the distinction, the ideal way of doing it. As stated, even if you meet all these technical requirements, the act itself could still be rendered lawful if you have a justification reason. Most hackers argue either that there is a state of emergency for the personal data at stake, or that there is a state of emergency because critical infrastructure is affected and we all depend on it. This is highly debatable, because what does ‘immediate’ mean? The state of emergency may have arisen quite a while before and may have existed for quite a while already.
And there is another, even more severe question about the justification argument, because hacking is not just one act, it is a series of actions, and the question is which of these actions are actually covered by the justification reason. So how far can I, as a hacker, actually go, and how far is too far? What is actually required? After all these issues, we want to mention at least one good thing, which is that most countries that up to this point still equate ethical hacking and malicious attacks actually do not convict or prosecute. We see, for example in the US and in France, that there are public enforcement directives. You can see this, for example, for the USA on the justice.gov website, where they state that as long as you follow the responsible disclosure guidelines, they won’t prosecute. Or, in the case of France: if you report to the authority responsible for security, then you have a safe harbor, and they won’t reveal your name even if some kind of complaint is filed. As said, you have still committed a crime; it is just not prosecuted. And this comes with a little caveat, because what hackers, especially ethical hackers, like to do is use what they have done for educational purposes and publish it, and they are not allowed to do that. As soon as they do, all this kind of on-hold procedure is gone. That is not helpful, because we want people to publish what could be a security vulnerability and exchange views on it. So to sum it up, we have basically four different legal approaches. We have explicit statutory support, as in Poland, where the law already frames ethical hackers as not being criminals, the optimal version. Then we have the second, favorable version of adding requirements that are typically not fulfilled by ethical hackers. Also good, but not optimal, because we would prefer the reframing of the first version. Then we have the prosecution directives, meaning, as stated for France, the creation of a safe harbor. The last one, which is still what happens in most countries, is the least favorable, because it leaves the hacker relying on justification reasons, that is, basically on the interpretation of different judges, and he never knows what is going to happen. And then there is also the fact that a prosecution investigation may still go ahead, meaning that hackers might face hard procedures, the mental load of legal battles, and even reputation loss, which especially affects those who also run another business as IT researchers. Leaving it at that, I can hand over to Tim and ask him what his wish list for an ideal legal framework would be.


Tim Philipp Schafers: Yes, we thought about what might help, and for a better legal framework we have outlined at least four things that are important. First, legal certainty needs to be established. As Caro mentioned, in many cases when a hacker reports something, a case may or may not be opened; it would be much better if it were very clear where you can responsibly report security vulnerabilities and how to act within the legal framework. Second, explicit immunity: as with the safe harbor regulations we heard about, it should be stated in the law itself that you are allowed to report security vulnerabilities. As mentioned before, many computer emergency response teams around the world say, please report security vulnerabilities to us, but this case does not exist in the law at all. So it is very important that lawmakers also understand that ethical hacking makes sense and helps secure systems, enhancing security for companies and for society in general. Third, a reframing of hacking, so that it is not seen purely negatively, as something that harms people or systems, but also as something positive. In the media, as mentioned before, the term hacker very often has a negative connotation, but from our perspective that need not be the case; it is more a question of how we perceive it and how these people actually act, and there is a way of acting responsibly. Fourth, the differentiation between ethical hacking and malicious actors, which is really important and, in many cases, missing from the law itself, which simply describes hacking as a bad thing. That might be something from the past, and we need to reframe it. Then some general actions, things we would ask of you: collect these points about a better legal framework and raise them in discussions within your company, and maybe with lawmakers, sharing the idea and explaining why it makes sense to differentiate between malicious and ethical activities. A harmonized regulation would also make sense, because even if some countries adopt the change, a problem remains: a security vulnerability you find in a piece of software may affect many different countries and jurisdictions, and if you report it in one country and then in another with a stricter hacking law, you could face legal problems there. So it would make a lot of sense to harmonize the regulation and the reporting channels. And in general, and this is also why we are giving this talk, we want greater public awareness of and empathy for these topics, so that they can be discussed. Because the ultimate goal, from our perspective, is to really tackle security vulnerabilities and make it harder for malicious actors to break systems, and for that a stronger collaboration between the private sector, the ethical hacking community, and government is needed to raise the security level.
Because from our perspective, these groups are still sometimes in their own corners: the government says we need to prosecute hackers, because, as we have seen, cybercrime is a big topic; the hacking community tries to improve software through open source projects, as we have heard; and private companies of course have an interest in prosecuting malicious actors, but can also, as Caro mentioned, reward ethical hacking through bug bounty programs and really use it as a driving force to help secure systems. That as an overview, so thank you very much. We have time for one or two questions, if there are any from the audience. So, are there any questions or examples? We have one here at the front.


Audience: Thank you. Not really an example, just a question. I gather you are German. Do you have any idea where this is going in Germany? There was that referendum, which didn't fly, I understand. Any other progress in sight?


Tim Philipp Schafers: Actually, we have a new government, and they have put this in their plan for the coming year. So my hope is that over the next couple of years we will see some progress there. But the last referendum is now gone, so it needs to be built up completely anew, which from our point of view is really important, because German law explicitly does not differentiate between ethical hacking and malicious attempts.


Carolin Kothe: The referendum is, I think, the one I talked about. The referendum that existed before the election actually included an exception for people who act solely for the purpose of securing a system, and it added this additional intent requirement. But it is still up for debate whether that is merely an acceptable solution or an ideal one, because what they did was simply add a paragraph saying the act is not deemed unauthorized if certain conditions are met. That might seem simple, so why does it matter? Some argue that it shifts the question of who needs to prove what. Some ethical hackers read it as meaning they would now need to prove that they had no malicious intent. In my view, that is not the case, because in Germany the prosecution bears the burden of proof, and when prosecutors need to prove a certain intent, such as an intent of enrichment or of harming someone, they will usually struggle to do so. There is one small exception: sometimes ethical hackers are a little incautious with the wording of their reports and write, for instance, that they would be happy to receive a reward for finding the vulnerability, and that could raise suspicion. But apart from that, I think it is fine.


Audience: Hi, thanks for the excellent presentation. I raised my hand a few minutes ago, and you have already started answering my question. But I was wondering about this intent requirement you were just talking about: doesn't it perhaps expose security researchers to intrusive surveillance practices aimed at figuring out whether there was malicious intent? Do you know of anything like this going on, or is it not possible under the current laws?


Tim Philipp Schafers: As Caro described, cases are very often opened, and when a case is opened there is uncertainty for the people affected. For security researchers, that could indeed mean being placed under surveillance, because somebody may want to find out what they are doing, why they are doing it, whether they are acting on their own, and so on. That is why we need clearer regulation, to make sure people are not threatened, can report responsibly, and have peace of mind in what they are doing, because they are securing systems that are very important to us.


Carolin Kothe: That is also why we graded the prosecution-discretion approach a little lower, because it means there is already an investigation into whether you had this intent, whether you acted in good faith, and whether you followed all the responsible disclosure guidelines. In practice, we know this basically means you get called in and asked what you did and what your intention was; if they are satisfied, you are good to go. But that already causes hardship for the ethical hacker, who knows he is part of a prosecution investigation.


Audience: Hi, I'm Janik. I used to work in the industry, and what I saw at that time is that it's also a matter of brain drain, because people would rather go in the black hat direction than the white hat direction, working exclusively over the onion net or something like that. Would you say that's still the case today, or are things in a better state?


Tim Philipp Schafers: In some cases it makes sense to report security vulnerabilities anonymously, because you do not want your name attached to the report; I know of cases where this has happened. From my perspective it is very sad that such measures are needed, or that security researchers feel they have to hide their activity behind the onion network, because this should be legal: it really helps us secure systems. In my view it is an attitude from the past to say this is simply illegal activity that must be prosecuted, because we have learned a lot through hacking about how the world and its systems work and how to improve them. Every human makes mistakes, and every program or computer can contain mistakes, so it makes sense to recognize this and change things for the better, with regard to hacking in general and maybe also to the law. Okay, I think we are done. Thank you very much for having us, and have a nice day. Thank you.


C

Carolin Kothe

Speech speed

143 words per minute

Speech length

2100 words

Speech time

877 seconds

Hacking involves systematic testing to uncover security vulnerabilities, with the actual judgment depending on intent, authorization, and methods used

Explanation

Kothe argues that hacking itself is simply the systematic testing of systems to find vulnerabilities, and whether it’s considered ethical or malicious depends on three key factors: the hacker’s intent, whether they have authorization, and what methods they employ.


Evidence

Distinguished between malicious acts (seeking private gain, sabotage, theft) and ethical hacking done for society’s benefit


Major discussion point

Definition and Types of Ethical Hacking


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Tim Philipp Schafers

Agreed on

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking


Ethical hacking can be divided into two subtypes: authorized (contracted penetration testing/bug bounties) and unauthorized but benevolent (done for society’s interest without financial gain)

Explanation

Kothe categorizes ethical hacking into two distinct groups: those who have explicit contracts and authorization from companies through penetration testing or bug bounty programs, and those who work without individual contracts but act in society’s interest without seeking financial benefit.


Evidence

Examples of companies hiring penetration test teams and running bug bounty programs to invite external testers


Major discussion point

Definition and Types of Ethical Hacking


Topics

Cybersecurity | Legal and regulatory


External hackers are indispensable as the majority of disclosure reports come from external testers, as recognized by the NIS2 directive

Explanation

Kothe emphasizes that external hackers play a crucial role in cybersecurity, with most vulnerability disclosures coming from outside testers rather than internal security teams. This importance has been formally recognized by regulatory frameworks.


Evidence

The NIS2 directive explicitly states that the majority of disclosure reports come from external testers


Major discussion point

Importance and Benefits of Ethical Hacking


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Tim Philipp Schafers

Agreed on

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking


Crowdsourced defense works effectively, with open source software serving as proof that many eyes make security stronger

Explanation

Kothe argues that distributed security testing through multiple contributors is highly effective, using the open source software model as evidence that having many different experts examine code leads to better security outcomes.


Evidence

Open source software relies on many eyes and different expertise to build higher security barriers, with increased open source usage demonstrating this principle


Major discussion point

Importance and Benefits of Ethical Hacking


Topics

Cybersecurity | Infrastructure


Companies are increasingly investing in bug bounty programs and recognizing the value of responsible vulnerability reporting

Explanation

Kothe points out that the market is already demonstrating the value of ethical hacking through increased corporate investment in bug bounty programs that reward responsible disclosure of vulnerabilities.


Evidence

Market reinforcement through companies investing heavily in bug bounty programs that pay those who report responsibly


Major discussion point

Importance and Benefits of Ethical Hacking


Topics

Cybersecurity | Economic


Ethical hacking helps tackle the surge in cybercrime and its associated costs, both monetary costs and intangible risks

Explanation

Kothe argues that ethical hacking is essential for addressing the growing cybercrime problem, which brings not only direct financial costs but also intangible risks that affect society broadly.


Evidence

Cybercrime is surging and its costs are rising sharply, leading regulators to recognize the need for companies to invest in security systems


Major discussion point

Importance and Benefits of Ethical Hacking


Topics

Cybersecurity | Economic


Most countries equate ethical hacking with criminal hacking, creating statutory uncertainty for ethical hackers

Explanation

Kothe explains that the majority of legal systems fail to distinguish between ethical and malicious hacking, treating all hacking activities as criminal regardless of intent or purpose. This creates legal uncertainty for those trying to improve security.


Evidence

Statutory certainty is quite rare for ethical hackers, with most countries still equating ethical hacking with criminals


Major discussion point

Legal Framework Disparities Across Jurisdictions


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Tim Philipp Schafers

Agreed on

Current legal frameworks are inadequate and create uncertainty for ethical hackers


Poland provides a rare positive example with explicit statutory support, stating no offense is committed when done solely for system security purposes

Explanation

Kothe highlights Poland as an exceptional case where the legal system explicitly supports ethical hacking by providing clear statutory language that exempts security-focused hacking from criminal prosecution.


Evidence

The Polish penal code explicitly supports ethical hacking by stating no offense is committed if done solely for securing a system, described as a ‘unicorn regulation’


Major discussion point

Legal Framework Disparities Across Jurisdictions


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Tim Philipp Schafers

Agreed on

Legal reform should include explicit statutory protection and clear differentiation


Legal frameworks differ in their elements: some require bypassing security measures, others have authorization as objective elements vs. justifications, creating confusion about whose authorization is needed for third-party systems

Explanation

Kothe explains that different jurisdictions structure their computer crime laws differently, with some including security bypassing as an element and others treating authorization differently. This creates particular confusion when ethical hackers might access third-party systems while working on commissioned projects.


Evidence

Some countries have additional bypassing of security measures requirements, and authorization sometimes appears as objective element vs. justification, with disputes over whose authorization is needed for third-party systems


Major discussion point

Legal Framework Disparities Across Jurisdictions


Topics

Legal and regulatory | Jurisdiction


Some countries like Latvia add substantial harm requirements, while others, such as Austria and the German draft reform, include intent to harm as a subjective element, better distinguishing ethical from malicious hacking

Explanation

Kothe describes how some jurisdictions have developed better legal frameworks by adding requirements that help distinguish ethical hackers from malicious actors, either through harm thresholds or intent requirements that ethical hackers typically don’t meet.


Evidence

Latvia requires extra substantial harm; Austria and the German draft reform include intent to harm or enrich as subjective elements, which differentiates ethical hackers from malicious attackers


Major discussion point

Legal Framework Disparities Across Jurisdictions


Topics

Legal and regulatory | Cybersecurity


Even when following responsible disclosure policies, ethical hackers lack statutory certainty and may still be treated as criminals

Explanation

Kothe emphasizes that even ethical hackers who follow all best practices for responsible disclosure still face legal uncertainty and potential criminal treatment because the laws themselves don’t provide clear protection.


Evidence

Following disclosure policies doesn’t guarantee protection from being treated as criminals, with statutory certainty being quite rare


Major discussion point

Current Legal Challenges and Prosecution Approaches


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Tim Philipp Schafers

Agreed on

Current legal frameworks are inadequate and create uncertainty for ethical hackers


Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes

Explanation

Kothe explains that while some countries have created practical protections through prosecutorial discretion, these approaches still treat ethical hacking as criminal activity and restrict hackers’ ability to share their knowledge publicly for educational purposes.


Evidence

US justice department website states they won’t prosecute if responsible disclosure guidelines are followed; France provides safe harbor through their security authority but hackers still committed crimes and cannot publish findings


Major discussion point

Current Legal Challenges and Prosecution Approaches


Topics

Legal and regulatory | Cybersecurity


Disagreed with

– Tim Philipp Schafers

Disagreed on

Adequacy of prosecution discretion approaches vs. statutory reform


Legal investigations can cause hardship for ethical hackers even when they ultimately face no prosecution

Explanation

Kothe points out that even when ethical hackers are not ultimately prosecuted, the investigation process itself creates significant burden and stress for individuals who are trying to help improve security.


Evidence

Prosecution investigation procedures can cause mental load of legal battles and reputation loss, especially affecting IT researchers


Major discussion point

Concerns About Implementation and Surveillance


Topics

Legal and regulatory | Human rights


Current prosecution approaches still involve investigation procedures that create mental burden and potential reputation loss for ethical hackers

Explanation

Kothe argues that even the more favorable prosecution discretion approaches still subject ethical hackers to investigation procedures that can cause significant personal and professional harm through mental stress and damage to their reputation.


Evidence

Hackers under investigation may face harsh procedures, the mental load of legal battles, and reputation loss, especially those who also run IT research businesses


Major discussion point

Concerns About Implementation and Surveillance


Topics

Legal and regulatory | Human rights


Agreed with

– Tim Philipp Schafers

Agreed on

Current legal frameworks are inadequate and create uncertainty for ethical hackers


T

Tim Philipp Schafers

Speech speed

141 words per minute

Speech length

2060 words

Speech time

872 seconds

The hacker ethic from the 1980s establishes moral principles including breaking things to enhance security, not littering with others’ data, and protecting private information

Explanation

Schafers argues that the hacking community has long-established ethical principles that guide responsible behavior, emphasizing that true hackers follow moral guidelines about how to conduct their activities responsibly.


Evidence

The hacker ethic from the 1980s describes breaking things to enhance and secure them, not littering with other people’s data, and using public data while protecting private data; later extended by the Chaos Computer Club


Major discussion point

Definition and Types of Ethical Hacking


Topics

Cybersecurity | Sociocultural


Agreed with

– Carolin Kothe

Agreed on

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking


Breaking systems helps fix them, as demonstrated by examples like Heartbleed bug discovery, DEF CON voting village testing, and responsible information handling by activist groups

Explanation

Schafers provides concrete examples to illustrate how the process of finding and responsibly disclosing vulnerabilities leads to improved security across various domains, from web encryption to voting systems to public information access.


Evidence

Heartbleed bug in OpenSSL (2014) found and fixed through responsible disclosure; DEF CON voting village tests voting machine security; Taiwanese activist group made user-friendly disclosure of public information through APIs


Major discussion point

Definition and Types of Ethical Hacking


Topics

Cybersecurity | Infrastructure


Agreed with

– Carolin Kothe

Agreed on

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking


Ethical hackers face emotional pressure and uncertainty when finding vulnerabilities due to unclear legal coverage

Explanation

Schafers explains that the current legal uncertainty creates significant psychological stress for ethical hackers who discover vulnerabilities but are unsure whether reporting them might lead to legal consequences.


Evidence

Ethical hackers are threatened by the classical legal system and face uncertainty about whether vulnerability reporting is fully covered by law


Major discussion point

Current Legal Challenges and Prosecution Approaches


Topics

Legal and regulatory | Human rights


Agreed with

– Carolin Kothe

Agreed on

Current legal frameworks are inadequate and create uncertainty for ethical hackers


Legal certainty must be established so hackers know where and how to responsibly report vulnerabilities

Explanation

Schafers argues that clear legal frameworks are essential so that ethical hackers can understand exactly what is permitted and have confidence in their ability to report security issues without legal risk.


Evidence

Computer emergency response teams around the world receive and handle reports, but the legal framework doesn’t explicitly support this


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Carolin Kothe

Agreed on

Legal reform should include explicit statutory protection and clear differentiation


Explicit immunity should be codified in law, not just stated by computer emergency response teams

Explanation

Schafers emphasizes that legal protection for ethical hackers needs to be formally written into law rather than just being policy statements from technical organizations, ensuring that lawmakers understand the value of ethical hacking.


Evidence

Computer emergency response teams say to report vulnerabilities, but this case doesn’t exist in law at all


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Carolin Kothe

Agreed on

Legal reform should include explicit statutory protection and clear differentiation


Disagreed with

– Carolin Kothe

Disagreed on

Adequacy of prosecution discretion approaches vs. statutory reform


Reframing of hacking is needed to move away from purely negative connotations in media and public perception

Explanation

Schafers argues that society needs to change how it perceives hacking, moving beyond the purely negative framing to recognize the positive contributions that ethical hackers make to security and society.


Evidence

The media very often give the term hacker a negative connotation, but this perception needs to change based on how people actually act


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Sociocultural | Cybersecurity


Clear differentiation between ethical hacking and malicious actors should be established in legal frameworks

Explanation

Schafers advocates for legal systems that can distinguish between hackers who help improve security and those who cause harm, rather than treating all hacking activities as inherently criminal.


Evidence

Current laws often just describe hacking as bad without differentiation, which is something from the past that needs reframing


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Carolin Kothe

Agreed on

Legal reform should include explicit statutory protection and clear differentiation


Harmonized international regulation is necessary since software vulnerabilities affect multiple jurisdictions

Explanation

Schafers explains that because software is used globally, ethical hackers need consistent legal protection across countries to avoid facing different legal risks when reporting the same vulnerability that affects multiple jurisdictions.


Evidence

Software vulnerabilities might be used in different countries and jurisdictions, creating problems when one country has stricter hacking laws than another


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Legal and regulatory | Jurisdiction


Greater public awareness and collaboration between private sector, ethical hacking community, and government is needed to enhance overall security

Explanation

Schafers calls for breaking down silos between different stakeholders and fostering collaboration to improve cybersecurity, arguing that currently these groups often work in isolation when they should be working together.


Evidence

Currently stakeholders are sometimes in their corners – government prosecuting hackers, hacking community improving open source, private companies using bug bounties – but stronger collaboration is needed


Major discussion point

Proposed Solutions for Legal Framework Reform


Topics

Cybersecurity | Legal and regulatory


A

Audience

Speech speed

170 words per minute

Speech length

298 words

Speech time

105 seconds

Intent requirements may expose security researchers to intrusive surveillance practices to determine malicious intent

Explanation

An audience member raises concern that legal frameworks requiring proof of intent could lead to invasive surveillance of security researchers to determine whether their motivations were malicious or benevolent.


Major discussion point

Concerns About Implementation and Surveillance


Topics

Human rights | Privacy and data protection


Current legal uncertainty may cause brain drain, with researchers potentially moving toward black hat activities rather than white hat ethical hacking

Explanation

An audience member suggests that the legal risks and uncertainties facing ethical hackers might drive talented security researchers away from legitimate white hat activities toward illegal black hat hacking where they can work anonymously.


Evidence

People would rather work in the black hat direction, exclusively over onion networks, than in the white hat direction


Major discussion point

Current Legal Challenges and Prosecution Approaches


Topics

Cybersecurity | Legal and regulatory


Agreements

Agreement points

Ethical hacking provides essential security benefits and should be distinguished from malicious hacking

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Hacking involves systematic testing to uncover security vulnerabilities, with the actual judgment depending on intent, authorization, and methods used


External hackers are indispensable as the majority of disclosure reports come from external testers, as recognized by the NIS2 directive


The hacker ethic from the 1980s establishes moral principles including breaking things to enhance security, not littering with others’ data, and protecting private information


Breaking systems helps fix them, as demonstrated by examples like Heartbleed bug discovery, DEF CON voting village testing, and responsible information handling by activist groups


Summary

Both speakers agree that ethical hacking serves a vital security function and should be clearly differentiated from malicious activities based on intent, methods, and outcomes. They provide evidence of its effectiveness and established ethical principles.


Topics

Cybersecurity | Legal and regulatory


Current legal frameworks are inadequate and create uncertainty for ethical hackers

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Most countries equate ethical hacking with criminal hacking, creating statutory uncertainty for ethical hackers


Even when following responsible disclosure policies, ethical hackers lack statutory certainty and may still be treated as criminals


Ethical hackers face emotional pressure and uncertainty when finding vulnerabilities due to unclear legal coverage


Current prosecution approaches still involve investigation procedures that create mental burden and potential reputation loss for ethical hackers


Summary

Both speakers agree that existing legal systems fail to provide adequate protection for ethical hackers, creating uncertainty and stress even for those following best practices.


Topics

Legal and regulatory | Cybersecurity


Legal reform should include explicit statutory protection and clear differentiation

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Poland provides a rare positive example with explicit statutory support, stating no offense is committed when done solely for system security purposes


Legal certainty must be established so hackers know where and how to responsibly report vulnerabilities


Explicit immunity should be codified in law, not just stated by computer emergency response teams


Clear differentiation between ethical hacking and malicious actors should be established in legal frameworks


Summary

Both speakers advocate for comprehensive legal reform that provides explicit statutory protection for ethical hackers and establishes clear legal distinctions between ethical and malicious activities.


Topics

Legal and regulatory | Cybersecurity


Similar viewpoints

Both speakers believe in the effectiveness of collaborative, distributed approaches to cybersecurity and see market validation through increased corporate investment in ethical hacking programs.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Crowdsourced defense works effectively, with open source software serving as proof that many eyes make security stronger


Companies are increasingly investing in bug bounty programs and recognizing the value of responsible vulnerability reporting


Greater public awareness and collaboration between private sector, ethical hacking community, and government is needed to enhance overall security


Topics

Cybersecurity | Economic


Both speakers recognize that the global nature of software and cybersecurity requires harmonized international legal approaches rather than fragmented national regulations.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Harmonized international regulation is necessary since software vulnerabilities affect multiple jurisdictions


Legal frameworks differ in their elements: some require bypassing security measures, others have authorization as objective elements vs. justifications, creating confusion about whose authorization is needed for third-party systems


Topics

Legal and regulatory | Jurisdiction


Both speakers believe that societal perception of hacking needs to change and that current prosecutorial discretion approaches are insufficient because they still treat ethical hacking as criminal and restrict educational sharing.

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Reframing of hacking is needed to move away from purely negative connotations in media and public perception


Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Topics

Sociocultural | Legal and regulatory


Unexpected consensus

Prosecution discretion approaches are inadequate despite being more favorable than criminalization

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Legal investigations can cause hardship for ethical hackers even when they ultimately face no prosecution


Explanation

It’s somewhat unexpected that both speakers would criticize what might seem like progressive approaches (prosecutorial discretion) as still inadequate. This shows their commitment to fundamental legal reform rather than accepting partial solutions.


Topics

Legal and regulatory | Cybersecurity


The importance of educational sharing and publication of security findings

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Greater public awareness and collaboration between private sector, ethical hacking community, and government is needed to enhance overall security


Explanation

The emphasis on the right to publish and share security research findings for educational purposes represents an unexpected consensus on the importance of knowledge dissemination beyond just vulnerability reporting.


Topics

Cybersecurity | Human rights


Overall assessment

Summary

There is strong consensus between the two main speakers on the fundamental issues: ethical hacking provides essential security benefits, current legal frameworks are inadequate and harmful, and comprehensive legal reform with explicit statutory protection is needed. They also agree on the need for international harmonization and societal reframing of hacking.


Consensus level

Very high consensus between the main speakers, with audience questions reinforcing concerns about current legal approaches. This strong agreement suggests a well-developed shared understanding of the problems and solutions in this field, which could facilitate coordinated advocacy for legal reform.


Differences

Different viewpoints

Adequacy of prosecution discretion approaches vs. statutory reform

Speakers

– Carolin Kothe
– Tim Philipp Schafers

Arguments

Some countries like the US and France have prosecution discretion policies creating safe harbors, but hackers still technically commit crimes and cannot publish their findings for educational purposes


Explicit immunity should be codified in law, not just stated by computer emergency response teams


Summary

While Kothe acknowledges prosecution discretion as a partial solution, Schafers emphasizes the inadequacy of this approach and the need for explicit legal immunity. Kothe presents it as one of four approaches while Schafers argues it’s insufficient because it still treats ethical hacking as criminal.


Topics

Legal and regulatory | Cybersecurity


Unexpected differences

Scope of surveillance concerns in intent-based legal frameworks

Speakers

– Audience
– Tim Philipp Schafers
– Carolin Kothe

Arguments

Intent requirements may expose security researchers to intrusive surveillance practices to determine malicious intent


Legal investigations can cause hardship for ethical hackers even when they ultimately face no prosecution


Explanation

An audience member raised concerns about surveillance implications of intent-based frameworks, which the speakers had not fully addressed despite advocating for intent-based legal distinctions. This revealed a potential tension between their proposed solutions and privacy concerns.


Topics

Human rights | Privacy and data protection | Legal and regulatory


Overall assessment

Summary

The discussion showed minimal direct disagreement between the main speakers, who were largely aligned in their goals. The primary tension was between different approaches to legal reform rather than fundamental disagreements about objectives.


Disagreement level

Low disagreement level among main speakers, with most differences being matters of emphasis rather than substance. The audience questions revealed some unaddressed concerns about implementation details, but overall there was strong consensus on the need for legal reform to protect ethical hackers. This high level of agreement suggests the speakers were presenting a unified advocacy position rather than debating competing approaches.


Takeaways

Key takeaways

Ethical hacking should be legally distinguished from malicious hacking based on intent, authorization, and methods used


Current legal frameworks in most countries treat ethical and malicious hacking equally, creating uncertainty and potential criminalization of beneficial security research


External ethical hackers are essential for cybersecurity, with the majority of vulnerability disclosures coming from external testers


Poland provides the best legal model with explicit statutory support for ethical hacking when done solely for system security purposes


Four legal approaches exist: explicit statutory support (optimal), additional requirements favoring ethical hackers, prosecution discretion policies, and reliance on justification defenses (least favorable)


Legal uncertainty may cause brain drain from white hat to black hat activities and discourage beneficial security research


Harmonized international regulation is necessary since software vulnerabilities affect multiple jurisdictions


Resolutions and action items

Collect and discuss points about better legal frameworks within companies and with lawmakers


Share ideas about differentiating between malicious and ethical activities to promote understanding


Work toward harmonized international regulation for vulnerability reporting


Increase public awareness and empathy about ethical hacking through education and discussion


Foster stronger collaboration between private sector, ethical hacking community, and government to enhance security


Unresolved issues

Germany’s new government plans to address ethical hacking legislation but timeline and specific approach remain uncertain


Debate continues over whether intent requirements place burden of proof on ethical hackers


Concerns about potential intrusive surveillance of security researchers to determine intent remain unaddressed


Question of how far ethical hackers can go in their testing activities and what actions are covered by legal justifications


Uncertainty about whose authorization is needed when accessing third-party systems during security research


Issue of ethical hackers being unable to publish findings for educational purposes under current prosecution discretion policies


Suggested compromises

Prosecution discretion policies that create safe harbors for ethical hackers who follow responsible disclosure guidelines (as implemented in US and France)


Adding substantial harm requirements to legal frameworks to create higher thresholds that favor ethical hackers


Including intent to harm or enrich as subjective elements in laws to better distinguish ethical from malicious hacking


Creating explicit exceptions in law for those acting solely to secure systems while maintaining overall computer crime protections


Thought provoking comments

We can even distinguish two subtypes of ethical hacking. One is authorized, meaning companies actually hire penetration test teams or run bug bounty programs… and then we have the other, even more highly debatable group, which doesn’t have these individual contracts but is working without seeking financial benefit, doing it in society’s interest.

Speaker

Carolin Kothe


Reason

This distinction is crucial because it identifies the core legal challenge – while authorized ethical hacking has some legal protection through contracts, unauthorized ethical hacking done for societal benefit exists in a legal gray area. This nuanced categorization moves beyond the simple ‘good hacker vs bad hacker’ narrative to reveal the complexity of motivations and legal standings.


Impact

This comment established the fundamental framework for the entire discussion. It shifted the conversation from a binary view of hacking to a more sophisticated understanding that would inform all subsequent legal analysis. The presenters repeatedly returned to this distinction when discussing different jurisdictions and legal approaches.


Most countries still equate ethical hacking with criminals… And we did find one good example, one rare example, in the Polish penal code, which explicitly supports ethical hacking in the sense that it says no offense is committed if you do it solely for the purpose of securing a system. However, this is a kind of unicorn regulation, because other states don’t make this differentiation.

Speaker

Carolin Kothe


Reason

This observation is particularly insightful because it demonstrates that legal frameworks CAN distinguish between ethical and malicious hacking, but most choose not to. The term ‘unicorn regulation’ effectively captures how rare progressive legal thinking is in this area, highlighting the gap between what’s possible and what’s implemented.


Impact

This comment served as a pivotal moment that transitioned the discussion from theoretical concepts to concrete legal realities. It provided hope (Poland’s example) while emphasizing the widespread problem, setting up the subsequent detailed analysis of different jurisdictional approaches.


There is an even more severe question about the justification argument, because hacking is not just one act, it’s a series of actions, and the question is which of these actions are actually covered by the justification. So how far can I as a hacker actually go, and how far is too far?

Speaker

Carolin Kothe


Reason

This comment reveals a sophisticated understanding of the practical complexities that legal frameworks fail to address. It moves beyond theoretical discussions to the granular reality of how ethical hacking actually works – as a process involving multiple steps, each potentially requiring separate legal justification.


Impact

This observation deepened the technical legal analysis and highlighted why simple legal fixes are insufficient. It demonstrated that even well-intentioned legal protections may be inadequate because they don’t account for the multi-step nature of security research, adding complexity to the discussion of ideal legal frameworks.


I was wondering about this intent requirement… because I was wondering if it doesn’t maybe expose security researchers maybe to intrusive surveillance practices to like figure out if there was malicious intent.

Speaker

Audience member


Reason

This question introduced an unexpected dimension – the potential for legal protections themselves to create new problems. It showed sophisticated thinking about unintended consequences and how attempts to protect ethical hackers might paradoxically harm them through surveillance.


Impact

This question elevated the discussion by introducing the concept that legal solutions might create new problems. It prompted the speakers to acknowledge that even ‘better’ legal approaches (like prosecution discretion) still involve investigations that can harm ethical hackers, reinforcing their argument for clearer statutory protections.


What I saw at that time when I worked there, that it’s also a matter of brain drain, because people would go rather in the black hat direction and not in the white hat direction, just exclusively working over the onion net or something.

Speaker

Audience member (Janik)


Reason

This comment introduced a critical societal consequence that hadn’t been explicitly discussed – that unclear legal frameworks may actually push talented individuals toward malicious activities. It connected legal policy to broader cybersecurity outcomes in a concrete way.


Impact

This observation added urgency to the discussion by suggesting that poor legal frameworks don’t just harm individual ethical hackers, but may actively contribute to cybercrime by driving talent toward illegal activities. It reinforced the speakers’ arguments about the societal benefits of clear legal protections.


Overall assessment

These key comments transformed what could have been a straightforward legal presentation into a nuanced exploration of complex policy challenges. The speakers’ sophisticated categorization of ethical hacking types and jurisdictional approaches provided a solid analytical framework, while the audience questions introduced unexpected dimensions like surveillance concerns and brain drain effects. Together, these comments revealed that the issue extends far beyond simple legal reform – it involves balancing security needs, individual rights, societal benefits, and unintended consequences. The discussion evolved from describing the problem to exploring why solutions are complex and why the stakes are higher than initially apparent, ultimately making a compelling case for urgent, thoughtful legal reform.


Follow-up questions

What is the current status and future progress of ethical hacking legislation in Germany following the failed referendum?

Speaker

Audience member


Explanation

The audience member specifically asked about progress in Germany after the referendum didn’t pass, and while Tim mentioned the new government has plans, the specific timeline and approach remain unclear


Do intent requirements in ethical hacking laws expose security researchers to intrusive surveillance practices to determine malicious intent?

Speaker

Audience member


Explanation

This question addresses a potential unintended consequence of legal frameworks that require proving intent, which could lead to privacy violations for legitimate security researchers


Is there currently a brain drain problem where potential ethical hackers choose black hat activities over white hat due to legal uncertainties?

Speaker

Janik (audience member)


Explanation

This question explores whether unclear legal frameworks are pushing talented individuals toward illegal hacking activities rather than legitimate security research, which would be counterproductive to cybersecurity goals


How can harmonized international regulation be achieved given the complexity of different legal systems and jurisdictions?

Speaker

Tim Philipp Schafers and Carolin Kothe


Explanation

While they identified the need for harmonized regulation, the practical steps and mechanisms for achieving international coordination on ethical hacking laws were not detailed


What constitutes ‘substantial harm’ in jurisdictions like Latvia that use this threshold, and how can this vague term be better defined?

Speaker

Carolin Kothe


Explanation

Carolin noted that ‘substantial harm’ is an ambiguous term that helps ethical hackers but lacks clear definition, which could lead to inconsistent application


How far can ethical hackers go in their testing activities when relying on justification reasons, and what specific actions cross the line?

Speaker

Carolin Kothe


Explanation

This addresses the practical boundaries of ethical hacking activities and what constitutes acceptable versus excessive testing when operating under legal justifications


Whose authorization is actually required when ethical hackers access third-party systems during commissioned testing?

Speaker

Carolin Kothe


Explanation

This legal gray area affects even commissioned ethical hackers and needs clarification to provide proper legal protection


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #123 Responsible AI in Security Governance Risks and Innovation

WS #123 Responsible AI in Security Governance Risks and Innovation

Session at a glance

Summary

This discussion, moderated by Yasmin Afina from the United Nations Institute for Disarmament Research (UNIDIR), focused on responsible AI governance in security contexts and the critical role of multi-stakeholder engagement. The session was part of UNIDIR’s roundtable on AI security and ethics (RAISE), established in partnership with Microsoft to bridge global divides and foster cooperation on AI governance issues. Three expert panelists provided opening remarks: Dr. Jingjie He from the Chinese Academy of Social Sciences emphasized the importance of inclusive, multi-stakeholder approaches and highlighted AI’s positive applications in satellite remote sensing for conflict monitoring, while noting challenges like adversarial attacks. Michael Karimian from Microsoft outlined industry’s crucial role in establishing norms and safeguards, emphasizing transparency, accountability, due diligence throughout the AI lifecycle, and proactive collaboration to reduce global capacity disparities. Dr. Alexi Drew from the International Committee of the Red Cross advocated for comprehensive lifecycle management approaches to AI governance, arguing that ethical and legal considerations must be integrated at every stage rather than treated as afterthoughts.


The discussion addressed several critical concerns raised by participants, including the need for AI content authentication to prevent misinformation and violence, the risks of AI misalignment in military contexts where commanders may rely on AI systems under pressure, and questions about responsibility for mitigating AI risks in developing countries with limited technological control. All panelists agreed that responsibility for AI governance is shared among all stakeholders—governments, industry, civil society, and individuals—though each has distinct roles and capabilities. The conversation concluded with optimism that innovation and security can coexist when guided by proper values and governance frameworks, emphasizing that responsible AI development requires collective global effort rather than competitive approaches.


Keypoints

## Major Discussion Points:


– **Multi-stakeholder governance of AI in security contexts**: The discussion emphasized the critical need for inclusive engagement across various stakeholders (governments, industry, civil society, academia) to effectively govern AI applications in international peace and security, with particular focus on platforms like UNIDIR’s RAISE initiative.


– **Industry responsibility and proactive engagement**: Extensive discussion on how technology companies must take active roles in establishing norms, implementing due diligence processes, ensuring transparency and accountability, and contributing technical expertise throughout the AI lifecycle rather than treating governance as an afterthought.


– **Lifecycle management approach to AI governance**: A central theme focusing on the necessity of integrating ethical, legal, and technical governance considerations at every stage of AI development – from initial design and data selection through validation, deployment, and eventual decommissioning – rather than applying governance as a final checkpoint.


– **AI authenticity and content verification challenges**: Participants raised concerns about the security implications of AI-generated content that cannot be easily distinguished from human-created content, discussing the need for technical solutions like digital signatures to identify AI-generated materials and prevent misuse for disinformation or conflict instigation (a minimal signing sketch follows this list).


– **Military applications and human-machine interaction risks**: Discussion of specific challenges in military contexts, including the risk of AI systems becoming misaligned during battlefield use, commanders’ over-reliance on AI decision-support systems under pressure, and the importance of maintaining compliance with international humanitarian law in AI-enabled military operations.
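

To make the digital-signature idea from the authenticity point above concrete, here is a minimal, hypothetical sketch of how a content provider could sign generated material and how a recipient could verify it. It assumes the widely used Python `cryptography` package and Ed25519 keys; key distribution (publishing the provider's public key) is left out, and all names are illustrative rather than drawn from any standard discussed in the session.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Provider side: create a signing key pair. In practice the public key would
# be published out of band, e.g. in a model card or a provider registry.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the generated content so recipients can check origin and integrity.
content = "Example AI-generated text.".encode("utf-8")
signature = private_key.sign(content)  # 64-byte Ed25519 signature

# Recipient side: verify() raises InvalidSignature if the content was
# altered or was not signed by the holder of this key pair.
try:
    public_key.verify(signature, content)
    print("Verified: content matches the provider's signature.")
except InvalidSignature:
    print("Rejected: content altered or not from the claimed provider.")
```

Note that a scheme like this authenticates origin and integrity only; it does not by itself prove whether content is AI-generated, so it could address only one component of the verification challenge raised here.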


## Overall Purpose:


The discussion aimed to explore responsible AI governance frameworks for international peace and security through a multi-stakeholder lens, examining how different actors (UN institutions, industry, civil society, military) can collaborate to ensure AI technologies enhance rather than undermine global stability and security.


## Overall Tone:


The discussion maintained a professional, collaborative, and constructive tone throughout. It began with an informative and academic approach during the introductory presentations, then became more interactive and practically-focused during the Q&A session. Despite addressing serious security concerns and potential risks, the conversation remained optimistic about the possibility of achieving responsible AI governance through collective action. The tone was notably inclusive, with moderators actively encouraging participation from diverse geographic and sectoral perspectives, and speakers consistently emphasizing shared responsibility rather than assigning blame.


Speakers

– **Yasmin Afina** – Researcher from the United Nations Institute for Disarmament Research (UNIDIR), moderator of the session on responsible AI in security, governance, and innovation


– **Jingjie He** – Dr. from the Chinese Academy of Social Sciences, researcher working on AI and satellite remote sensing projects


– **Michael Karimian** – Representative from Microsoft, involved in the Roundtable for AI Security and Ethics (RAISE)


– **Alexi Drew** – From the International Committee of the Red Cross (ICRC), expert on the lifecycle management approach to AI governance in security


– **Bagus Jatmiko** – Commander in the Indonesian Navy, researcher in AI and information warfare in the military and defense sector


– **Audience** – Multiple audience members who asked questions and made comments during the session


**Additional speakers:**


– **Francis Alaneme** – Representative from the .ng domain name registry


– **George Aden Maggett** – Judge at the Supreme Court of Egypt and honorary professor of law at Durham University, UK


– **Rowan Wilkinson** – From Chatham House (submitted a question online but did not speak directly in the transcript)


Full session report

# Comprehensive Report: Responsible AI Governance in Security Contexts – Multi-Stakeholder Perspectives and Collaborative Frameworks


## Executive Summary


This discussion, moderated by Yasmin Afina from the United Nations Institute for Disarmament Research (UNIDIR), examined responsible artificial intelligence governance in international peace and security contexts. The session formed part of UNIDIR’s Roundtable for AI Security and Ethics (RAISE), a collaborative initiative established in March 2024 in partnership with Microsoft and other stakeholders to foster international cooperation on AI governance issues.


The discussion brought together perspectives from academia, industry, humanitarian organisations, military institutions, and the judiciary to explore multi-stakeholder approaches to AI governance challenges. Through interactive polling, structured presentations, and Q&A dialogue, participants examined questions about responsibility, accountability, and practical implementation of AI governance frameworks whilst addressing concerns about technical limitations, power imbalances, and real-world consequences of AI deployment in security contexts.


## Session Context and UNIDIR/RAISE Introduction


Yasmin Afina opened by explaining UNIDIR’s role as the UN’s dedicated research institute on disarmament, established during the Cold War to provide neutral space for dialogue on security issues. She positioned the RAISE initiative as continuing this tradition by creating depoliticised forums for AI governance discussions that can overcome competitive dynamics and distrust hindering international cooperation.


The moderator emphasised that whilst AI presents opportunities for enhancing international peace and security, it also introduces complex challenges requiring collaborative approaches across traditional boundaries. She noted the session’s connection to broader international efforts, including ongoing discussions around the Global Digital Compact and other UN-sponsored platforms addressing AI governance.


## Interactive Opening: Stakeholder Perspectives


Using Slido polling (code 179812), Afina engaged participants on two key questions: what AI and international peace and security means to them, and what role the multi-stakeholder community should play in its governance.


Participant responses highlighted diverse concerns including:


– Censorship and surveillance capabilities


– Fake news and misinformation


– Data privacy violations


– Facial recognition at borders


– Autonomous weapons systems


– Cybersecurity threats


These responses established the broad scope of AI governance challenges that would be addressed throughout the session.


## Expert Panel Presentations


### Academic Perspective: Dr. Jingjie He, Chinese Academy of Social Sciences


Dr. He emphasised the critical importance of multi-stakeholder approaches, arguing that while technological challenges can often be addressed technically, identifying their true nature requires interdisciplinary perspectives. She highlighted positive applications of AI in peace and security contexts, specifically referencing Amnesty International’s Darfur project as a successful example. This initiative used Element AI technology with almost 29,000 volunteers to analyse satellite imagery for conflict monitoring, demonstrating AI’s potential as a tool for humanitarian purposes.


However, Dr. He acknowledged significant technical challenges, particularly adversarial attacks that make AI systems fragile and governance discussions complex. She introduced the concept of AI as both a “force multiplier” and “threat multiplier,” noting that poorly designed systems create risks for both civilian populations and military forces themselves.
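The session did not specify which attack techniques Dr. He’s research examined, but the fragility she describes can be illustrated with the textbook fast gradient sign method (FGSM): a perturbation that is tiny per input feature can shift a model’s decision score dramatically. A minimal sketch, assuming a toy linear classifier in Python with NumPy; all names and numbers here are hypothetical.

```python
# Illustrative only: an FGSM-style adversarial perturbation against a toy
# linear classifier, standing in for the imagery-analysis models above.
import numpy as np

rng = np.random.default_rng(seed=0)
w = rng.normal(size=1000)   # stand-in for a trained model's weights
x = rng.normal(size=1000)   # stand-in for input features (e.g. pixel values)

score = float(w @ x)        # sign of the score decides the predicted class

# For a linear model, the gradient of the score with respect to x is just w,
# so nudging every feature by epsilon against the current decision is the
# worst-case small perturbation (the FGSM idea).
epsilon = 0.02
x_adv = x - epsilon * np.sign(w) * np.sign(score)

# Each feature moved by at most 0.02, yet the score shifts by
# epsilon * sum(|w|), which grows with the input dimension.
print(f"original score:  {score:+.2f}")
print(f"perturbed score: {float(w @ x_adv):+.2f}")
```

Defences such as adversarial training exist, but the asymmetry sketched here is one reason such systems need validation well beyond clean test data.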


Regarding transparency, Dr. He expressed scepticism about algorithm openness due to intellectual property concerns and industry practices of protecting core technologies. She concluded by emphasising shared responsibility for AI governance whilst acknowledging the need for better knowledge sharing between technology developers and decision-makers, particularly in military contexts.


### Industry Perspective: Michael Karimian, Microsoft


Karimian outlined industry’s role in establishing norms and safeguards for responsible AI deployment in security contexts. He emphasised that companies are uniquely positioned to identify risks early in development processes and have obligations under UN guiding principles to ensure their products are not used for human rights abuses.


He stressed industry responsibility extends beyond compliance to proactive engagement in norm-setting and standard development. Karimian advocated for clear standards ensuring AI systems used in security applications are transparent about their capabilities and limitations, with robust accountability mechanisms including documentation, monitoring, and auditing capabilities.


Addressing global capacity disparities, Karimian noted the importance of proactive collaboration to reduce inequalities in AI governance capabilities between developed and developing nations. He suggested industry has a role in supporting capacity-building initiatives, particularly where regulatory frameworks are still emerging.


### Humanitarian Perspective: Dr. Alexi Drew, International Committee of the Red Cross


Dr. Drew presented a comprehensive lifecycle management framework for AI governance, arguing that governance must be integrated at every stage from initial design through decommissioning. She identified three critical stages:


1. **Development stage**: Ensuring compliance requirements like International Humanitarian Law are built in from the outset


2. **Validation stage**: Addressing risks of localisation where systems may not work as intended in different contexts


3. **Deployment stage**: Managing inscrutability risks where users may not understand system limitations


Dr. Drew emphasised that systems should be designed, trained, and tested with compliance requirements integrated rather than retrofitted, preventing governance from becoming a “checkbox exercise.” She highlighted that all stakeholders possess various levers of influence, including participation in standard-setting organisations and procurement strategies to enforce governance requirements.
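One way to make the lifecycle framing concrete (a hypothetical sketch, not an ICRC or UNIDIR artefact) is to encode per-stage governance checks that require recorded evidence, so a system cannot advance to the next stage on an unsupported tick-box. The stage names follow Dr. Drew’s breakdown; every individual check below is an illustrative placeholder.

```python
# Hypothetical sketch: lifecycle-stage governance gates with evidence,
# so sign-off cannot become a bare "checkbox exercise".
from dataclasses import dataclass, field

@dataclass
class Check:
    name: str
    passed: bool = False
    evidence: str = ""  # reviewers must record why the check passed

@dataclass
class Stage:
    name: str
    checks: list[Check] = field(default_factory=list)

    def complete(self) -> bool:
        # A stage counts as done only when every check has both a pass
        # flag and recorded evidence.
        return all(c.passed and c.evidence for c in self.checks)

LIFECYCLE = [
    Stage("development", [
        Check("legal/IHL compliance requirements captured in the design"),
        Check("training data matches the intended deployment context"),
    ]),
    Stage("validation", [
        Check("tested in, or against, the local deployment context"),
        Check("failure modes documented for end users"),
    ]),
    Stage("deployment", [
        Check("users trained on capabilities, limits and failure modes"),
        Check("monitoring and audit trail in place"),
    ]),
]

def next_gated_stage(stages):
    """Return the first stage whose checks are not all satisfied."""
    for stage in stages:
        if not stage.complete():
            return stage.name
    return None  # every stage governed; redeployment restarts the cycle

print(next_gated_stage(LIFECYCLE))  # -> "development"
```

Because the gate function re-runs over the whole list, redeploying a system in a new context can simply reset the relevant checks and force the gates to be passed again, matching the cyclical reading of "lifecycle" discussed in the session.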


Addressing innovation concerns, Dr. Drew rejected the notion that responsible AI governance requires trade-offs between innovation and security, characterising this as a design challenge rather than a zero-sum game. She also stressed the importance of training military users to understand AI system capabilities, limitations, and failure modes.


## Audience Q&A and Discussion


### Content Authenticity Challenges


Francis Alaneme from the .ng domain name registry raised concerns about AI-generated content that cannot be distinguished from human-created materials, highlighting security implications of AI-generated video content being used to spread false information and potentially instigate violence.


Dr. Drew responded by mentioning the Content Authenticity Initiative (CAI) as an example of industry efforts to develop technical solutions for content authentication, whilst acknowledging implementation challenges in balancing comprehensive coverage with practical feasibility.
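Initiatives of this kind typically bind cryptographically signed provenance metadata to media files; the full specifications are far richer than anything shown here, but the underlying idea can be sketched in a few lines: a publisher signs a digest of the content, and any recipient verifies the signature against the publisher’s public key. A minimal sketch, assuming the widely used Python `cryptography` package; this is illustrative and not CAI code.

```python
# Minimal sketch of content provenance via a digital signature: the
# underlying idea only, not the CAI's actual implementation.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair and sign a digest of the content.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"bytes of a video frame or article from a verified capture device"
signature = private_key.sign(hashlib.sha256(content).digest())

# Recipient side: recompute the digest and verify it against the signature.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))           # True
print(is_authentic(b"tampered copy", signature))  # False
```

The hard parts in practice, as the discussion noted, are key distribution, coverage across capture devices and editing tools, and what unsigned content should be taken to mean.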


### Military Applications and Human-Machine Interaction


Commander Bagus Jatmiko, an Indonesian Navy officer and researcher in AI and information warfare, raised concerns about AI systems becoming misaligned during battlefield use. He introduced the concept of AI systems becoming “psychopathic” in their tendency to provide answers users want to hear rather than accurate assessments, warning this could be dangerous when commanders are under pressure and may accept AI-generated answers that confirm existing beliefs rather than challenge assumptions.


This highlighted the critical importance of training and education for AI system users in high-stakes environments where consequences of poor decision-making can be severe.


### Global Power Imbalances and Accountability


Judge George Aden Maggett from the Supreme Court of Egypt raised fundamental questions about responsibility and accountability, particularly regarding power imbalances between technology companies in developed countries and affected populations in developing nations. His intervention connected abstract policy considerations to real-world consequences, including civilian casualties in current conflicts involving AI-enabled weapons systems.


### Algorithm Transparency and Openness


Rowan Wilkinson from Chatham House asked about recent policy shifts regarding AI openness, prompting discussion about balancing transparency requirements with security and commercial considerations. This highlighted ongoing tensions between demands for accountability and practical constraints on algorithm disclosure.


## Key Themes and Takeaways


### Shared Responsibility Framework


All speakers agreed that responsibility for AI governance is distributed across stakeholders rather than concentrated in any single entity. This encompasses governments, industry, civil society, international organisations, and individuals, though speakers emphasised different implementation mechanisms.


### Multi-Stakeholder Engagement as Foundation


Strong consensus emerged that effective AI governance requires inclusive participation from diverse stakeholders, bringing together different perspectives and expertise to address complex technological challenges. Platforms like UNIDIR’s RAISE initiative provide valuable neutral spaces for knowledge-sharing that can transcend geopolitical constraints.


### Lifecycle Management Approach


Both industry and humanitarian perspectives converged on integrating governance considerations throughout the entire AI system lifecycle. This approach prevents governance from becoming mere compliance whilst ensuring ethical and legal considerations are substantively integrated into system design and operation.


### Technical Implementation Challenges


Several technical challenges remain unresolved, including practical implementation of content authentication systems, addressing adversarial attacks on AI systems used for peace and security monitoring, and developing effective mechanisms for preventing AI misalignment in operational contexts.


### Sustainability and Resource Concerns


Dr. He specifically noted funding challenges facing platforms like RAISE, emphasising that effective governance requires sustained commitment and resources from all stakeholders. This challenge could significantly impact long-term effectiveness of collaborative governance efforts.


## Conclusion


This discussion demonstrated both the complexity of AI governance challenges in security contexts and the potential for collaborative solutions. The consensus on fundamental principles of shared responsibility, multi-stakeholder engagement, and lifecycle management provides a foundation for developing governance frameworks that can enhance international peace and security whilst ensuring responsible AI development and deployment.


However, unresolved questions about sustainability, implementation, and accountability highlight significant work remaining to translate these principles into effective practice. The session’s combination of technical expertise, practical experience, and moral urgency suggests that effective AI governance will require continued collaboration across diverse stakeholder groups, sustained commitment to addressing global inequalities, and ongoing adaptation to evolving technological capabilities.


The optimistic perspective that innovation and security can coexist when guided by proper governance frameworks provides hope that these challenges can be addressed through collective effort rather than competitive approaches, though significant technical and institutional challenges remain to be resolved.


Session transcript

Yasmin Afina: Good afternoon from Oslo or good morning, wherever you are tuning in from. My name is Yasmin Afina, researcher from the United Nations Institute for Disarmament Research. And I have the pleasure of moderating today’s session on responsible AI in security, governance, and even innovation. For those who are joining us in person, may I please highly encourage you to come to the front, to this beautiful, almost roundtable, to allow us to have a free-flowing roundtable discussion, because this session forms part of our project related to the Roundtable for AI Security and Ethics, and in the spirit of having a roundtable, I do highly encourage everyone in the room who has just joined us today to join us in the front, because I would like this to be very interactive and highly engaging. And for those who are joining online, thank you very much for joining us online, wherever you are. And as we are using Zoom, I do encourage you to use the raise hand function if you would like to take the floor, as again, this is a very highly interactive discussion. So again, my name is Yasmin Afina and I am very pleased to be joined today by three excellent speakers who, for those who are in the room, we do not see them yet, but for those online, you will see them: Dr. Jingjie He from the Chinese Academy of Social Sciences, Michael Karimian from Microsoft, and Dr. Alexi Drew from the ICRC. And before we get into the kick-off remarks from our excellent panelists, I would like to spend five minutes to introduce you a little bit to the Roundtable for AI Security and Ethics, and my institute, the United Nations Institute for Disarmament Research. So, at a glance, UNIDIR is an autonomous institution within the United Nations. You can think of us like a think tank within the UN ecosystem. We’re independent from the Secretariat, and we’ve been established in 1980, at the height of the Cold War, to ensure that the deliberations of states are well-informed and evidence-based in the area of disarmament. Of course, today, the landscape of disarmament is much different from what it was in 1980, and so we are conducting evidence-based policy research, we’re conducting multi-stakeholder engagement, and we want to make sure that we also facilitate dialogue where there is none, including on sensitive issues such as AI in security. So, one of the priority areas of UNIDIR and our work is related to AI and autonomy in security and defense, including the military domain, and what we’ve noticed is that in the light of this technology’s highly unique nature, we understood very quickly the importance of multi-stakeholder engagement and perspectives to obtain input on the implications of AI for international peace and security. So, we saw the need to provide a platform for open, inclusive, and meaningful dialogue. We saw this need as well to warrant public trust and legitimacy, and to ensure that these discussions are not just a one-way discussion, but actually are coming both from the bottom-up but also from the top-down approach. We also want to make sure that we improve cross-disciplinary literacy and so on and so forth. You may see on the slides a number of very different incentives as to why the multi-stakeholder perspective is indeed important on this issue. So that is why, in March 2024, UNIDIR joined forces with Microsoft, in partnership with a series of other stakeholders, for the establishment of the Roundtable for AI Security and Ethics, RAISE.
Our idea is to bring together experts and thought leaders from all around the world. So we have, for example, experts from China, from Russia, from the United States and United Kingdom, but also from Namibia, Ecuador, Kenya, India. We really want to make sure that we bridge divides and we bridge the conversation when there is none on these issues of AI and security. We aspire to lay the foundation for robust global AI governance grounded in cooperation, transparency, and mutual learning, with the idea that we should overcome any sense of competitiveness or distrust, and where there is a need for building trust, this is where it would be. We also would like to use RAISE to foster and facilitate compliance with international law and ethical norms, in the light of their importance in the age of innovation, warfare, security, and destabilization. Finally, we would like to complement and reinforce responsible and ethical AI practices in the security and defense domains, again, in an area where we are hoping to disrupt monopolies in the hands of the few, and to ensure that all voices are heard from all layers of society. Before we hear from our excellent panelists, I would like to provide an opportunity for participants who are joining online, but also in person, to share their thoughts via Slido. For those who are unfamiliar with Slido, may I please ask our technicians to share the Slido presentation on screen. Thank you very much. First, before we start, I wanted to get your sense of what you think AI and international peace and security means for you. There is no right or wrong answer. And for that, what I would encourage you to do is to go on slido.com and put in the code 179812. For those who see the screen, use the QR code to join the conversation. And you will see a text box where you’ll be able to provide your input on what you think AI and international peace and security means for you. And again, no right or wrong answer. It really is for us to understand your thoughts and your perspectives and to really set the scene and see where things are at. Because, of course, it is important for us to share with you the work that we’re doing. But it’s also important for us to engage with the incredibly diverse IGF community to see what you think about this issue. So I will leave this poll open for a few minutes while you put in your contributions on what you think AI and international peace and security means for you. And the results should be showing in. Perhaps if I can ask our technicians to see if there’s any input that has been added. So it is not showing on the screen, but oh, sorry, if you can please come back to Slido. Sorry, I’m bugging the technicians. I can see, for those who have joined online, that there are quite a few responses already. So we see, for example, censorship, fake news that has been generated using AI. I see that AI could be used for good or for bad, and I really appreciate this balanced approach to looking at AI for international peace and security. I see issues related to data privacy and threats to human intelligence. I also see the use of AI in the military and law enforcement, and how they are used responsibly in their respective fields, facial recognition at borders, countering the proliferation of AI-enabled weapons systems, and I also see automated target selection. So a very wide range of responses, and please keep adding your responses to this question.
May I please ask our colleagues from IT to share Slido again, and this time for the next question. Oh perfect, now we can see them. So I think that this is great. Thank you very much to our IT team, and I’ve heard that there was a connectivity issue, so please bear with us as we navigate the hybrid space of discussions for this session. So now I’m going to get us to the second question. What should be the role of the multi-stakeholder community in the governance of AI and international peace and security? For those who are on Slido, it’s the same link. If you just refresh your page, or it should be refreshing on its own, you can please add your responses, and they should start appearing. And for those who have just joined us, may I encourage you to open slido.com using your laptop or your phone, by scanning the QR code, to provide your input on what you think should be the role of the multistakeholder community in the governance of AI and international peace and security. So I see already one response on agreeing on and implementing norms. I do encourage everyone to keep sharing their reflections on what they think should be the role of the multistakeholder community, because that will also help us at UNIDIR to inform our work on this and how to better engage with the multistakeholder community. So I see big commitments, and I would love to hear your thoughts when we open the floor on what sort of commitments you think the multistakeholder community could have a role in. I also see industry and AI, and perhaps again, when we open the floor for discussions, I would love to hear your thoughts on what they mean. And once again, for those who are joining us in the room physically, may I encourage you highly to come onto the stage to join us in the middle, to enable us to have a roundtable discussion so that it is interactive. I see a lot of input into the Slido, and I do see, for example, trust building, standards, proposed solutions, technical standards again, actionable legislation, responsibility and peace. So I do appreciate you really putting a lot of input into these discussions, and now that we’ve had this little warm-up exercise, I will please ask our IT colleagues to get us back to the PowerPoint for me to introduce once again our speakers for today’s discussions. So here is the way it will work for the rest of the session, as we have 45 minutes. I will be providing the floor to three speakers who are joining us online for kick-off remarks, which are supposed to be introductory and generate more questions and answers, perhaps, on very select issues related to AI, international peace and security. And then I will open the floor, for both those who are joining us online and in person, for a discussion on perhaps reactions to what you’ve heard, perhaps to elaborate a bit on the questions that you have, on the answers that you have shared with us, and also perhaps, if you have any questions for our panellists and speakers who are joining us today. So again, for those who have just joined, we are joined virtually by three excellent speakers: Dr Jingjie He from the Chinese Academy of Social Sciences, Dr Alexi Drew from the International Committee of the Red Cross, and Michael Karimian from Microsoft. For those who are joining in person, I assure you they are online and they should be appearing on the screen when it is their turn to speak. So now may I please turn online and ask Jingjie to provide us with her opening remarks.
May I please ask the IT colleagues to show Jingjie on the screen. Jingjie, you have the floor. Thank you very much. Thank you, Yasmin. Very nice to be meeting you all, and thank you for the invitation. Always a pleasure to join the conversation and to see your faces on screen.


Jingjie He: So I think the inclusive engagement across stakeholders is essential for the effective global governance of artificial intelligence, and the main reason will be that technological challenges, I believe, can often be addressed through technological solutions. However, the identification of the true nature of these challenges requires an interdisciplinary and multi-stakeholder approach. Such an inclusive approach ensures that a wide range of knowledge, expertise, and perspectives, often complementary in nature, are taken into account in shaping responsible and equitable understandings, norms, and policies for AI development and deployment. So here I want to take the opportunity to really underscore the importance of the UN-sponsored platforms, such as UNIDIR’s RAISE that Yasmin just introduced, and the IGF, and the Global Digital Compact, etc. So these platforms play a critical role in enabling multi-stakeholder engagements. What sets them apart from more state-centric mechanisms is their unique ability to provide neutral, depoliticized, and inclusive spaces. So within those platforms, knowledge-sharing and confidence-building can take place beyond the constraints of geopolitical tensions and national interests, allowing for more constructive, balanced, and therefore more promising outcomes. But of course, one dilemma that I want to point out is that such platforms, especially like RAISE, do face funding issues and questions about how to make the project more sustainable. I remember the first time I attended RAISE, Yasmin was sharing the concern that this project should be more sustainable and stuff like that. I do believe that Yasmin and Michael have done a great job supporting this program, but I do believe that this should be a more collective effort for all of us to bring resources and contribute to this project and these communities. So Yasmin also asked me to provide some concrete examples of how AI fosters international peace and security. One of my recent projects is about AI and satellite remote sensing. Satellite remote sensing has been increasingly recognized as a critical tool for international peace and security. In recent years there has been a growing interest in applying AI and machine learning to enhance the analytical efficiency of satellite imagery. One example is Amnesty International, in collaboration with a company called Element AI as well as almost 29,000 volunteers; they developed tools to automatically analyze satellite imagery for monitoring the conflict in Darfur. So this is just one of the many examples of how AI can empower satellite imagery analysis and benefit international peace, security, and non-proliferation missions, etc. Of course, I always care about the challenges. So one potential challenge is, as my previous research shows, that there’s always a challenge of adversarial attacks in such systems, which will make the system more vulnerable and our discussion more interesting and challenging. So I will stop for now and I will be happy to answer questions. Yasmin?


Yasmin Afina: Thank you very much, Jingjie He, for this very short and crisp but also very provocative introductory remarks, and I do appreciate you noting as well the difficulty that the UN is currently facing on fundraising. Of course, as a voluntarily funded institute, UNIDIR relies on voluntary contributions, so I do appreciate you noting the dire situation that we’re facing today to enable such dialogue to happen. I also appreciate you sharing the importance of AI in enhancing the ability to analyze and to monitor conflicts, including by civil society organizations. It does show the potential of AI to enhance international peace and security, while of course being balanced by the risks that may resurface, including with regards to adversarial attacks on these AI technologies. I think that one key aspect that you also shared with us is the importance of engaging all kinds of stakeholders, and we’re very fortunate today to be joined by Michael Karimian from Microsoft. Michael, may I please ask now that you provide us with your kick-off remarks, particularly to see what you think is the role of industry in supporting responsible AI practices for international peace and security. Michael, over to you.


Michael Karimian: Thank you, Yasmin. It’s a pleasure to join you all, and thank you, Yasmin, not just for facilitating today’s discussion but of course for being an essential partner in the work of the Roundtable for AI Security and Ethics. As we’ve heard, and as I think we already know, AI is and will rapidly reshape international security dynamics, and the governance frameworks needed to ensure its responsible use urgently require quite robust multi-stakeholder engagement, just as Jingjie outlined. And industry in particular has a critical role to play, obviously as developers and deployers of AI technology, but also, I think, as proactive stakeholders in establishing norms and standards and safeguards to mitigate risks associated with AI in security contexts. And the Roundtable for AI Security and Ethics has already quite clearly highlighted that while states and international organizations are vital in setting norms and regulations, industry in particular has quite practical contributions to governance, which I think can’t be overstated. So, for example, industry actors often are the first to encounter and understand AI risks and vulnerabilities, in part due to their direct involvement in developing and deploying these technologies. That can put industry players in a unique position to provide expertise on technical feasibility, operational impacts, and risk mitigation strategies, which are of course essential for effective governance. And through RAISE, industry stakeholders, including Microsoft, have already identified several key contributions that can be made. Firstly, transparency and accountability. Industry must develop and adhere to clear standards that ensure AI systems used in security applications are transparent in their capabilities and limitations, with accountability mechanisms clearly articulated. And that involves quite robust documentation practices, as well as continuous monitoring and the capability to audit AI systems, which together I think provide greater predictability and trust. Second, and relatedly, is the topic of due diligence. The Secretary-General’s upcoming report and also ongoing UN General Assembly discussions will likely continue to underscore the importance of due diligence, because industry actors have a responsibility to implement robust due diligence processes across the AI lifecycle, from design and development through to deployment and eventual decommissioning. And this aligns closely with lifecycle management approaches already being emphasized by both UNIDIR and the ICRC in its submission to the Secretary-General, and others. Third is the topic of proactive collaboration. Industry should actively contribute technical expertise and capacity-building initiatives, particularly in regions where regulatory frameworks are still emerging. Effective governance, of course, requires global equity in knowledge and resources, and so initiatives such as RAISE, but also RE-AIM, the responsible AI in the military domain process, we see them promoting practical and inclusive governance strategies which serve as a strong foundation. And industry collaboration through those platforms can, of course, further amplify these efforts. I think on the topic of reducing disparities and capacity-building and knowledge transfer, industry really does have significant technical and expertise resources that are needed to support governments, civil society, and international organizations, particularly those from the global south, in understanding and assessing and mitigating AI risks.
So strengthening global capacity is really key to ensuring inclusive governance and avoiding exacerbating already existing inequalities in security capabilities. I guess if we look ahead, industry’s engagement should continue to be structured, it should continue to be sustained, and it should, of course, be substantive. And this means participating in and supporting frameworks established through the United Nations and other multilateral venues, as well as initiatives such as RAISE, to collectively shape responsible AI governance and security. And I think that we can ensure that our collective or collaborative efforts lead not only to innovation but also to enhanced global stability, resilience, and trust. I look forward to the discussion.


Yasmin Afina: Thank you very much, Michael, again, for a very comprehensive overview of what you think should be the role of industry in promoting and enhancing responsible AI practices for international peace and security, both as developers but also as deployers. And I do appreciate your remarks as well, your points on industry needing to be a proactive actor to mitigate the risks and harms that may emerge from these technologies. I also note from your remarks the importance of implementing feasible and effective risk mitigation mechanisms throughout the life cycle of technologies for AI and for international peace and security. And we’re very fortunate to be joined by Dr. Alexi Drew from the International Committee of the Red Cross, who has been our expert within RAISE and who has been relentlessly promoting the importance of a life cycle management approach to the governance of AI and security. So now may I please ask Alexi to take the floor and also share her remarks on this point. Thank you very much, Alexi.


Alexi Drew: Thank you very much, Yasmin. Thank you, Michael, for setting the stage for me. It makes it a lot easier for me to continue my crusade to make life cycle management a feature that everyone is aware of, and to make people more aware of why it needs to be approached, understood and actually engaged with, rather than treated as a secondary feature. And that secondariness is actually one of the key reasons why life cycle management is critical, because we’ve been talking about governance quite a bit. We’ve been talking about the need to be responsible and ethical in how we design, develop and deploy these systems. But governance is not something that can be added on after the fact. It’s not an afterthought. It needs to be something which is designed to fit in each stage of the life cycle. Now, for the purposes of this discussion, I’m going to break life cycles down into very simple segments. In this case, we’re going to talk about the development stage, the validation stage and the deployment stage. And I thought it would be helpful if I gave you a particular series of risks with hypothetical context where those things are actually producing risks now, so we can understand why governance at each stage is important. So one of these risks, as I and the ICRC approach them, is that the trend we have tried to engage towards, a localisation of aid and assistance, is reversed through the utilisation of systems which are by their default and their design not local. So, for example, at the development stage you might use data which is taken from the global north to train a model which is designed to be deployed in the global south; it doesn’t reflect the realities. A predictive model based on this for the delivery of humanitarian aid, for example, is going to prioritise the delivery of aid to certain groups as opposed to others based upon the data that has been selected for it, which is not applicable to the local context, and that’s going to effectively create a compounding problem. At the validation stage, localisation could also create problems if it’s not properly taken into account. If you test something outside of the local context in which you intend to deploy it, you’re not actually testing for the scenario and circumstance and the context which the thing is going to be used in. So your ability to be sure that it’s delivering as expected is undermined; you’re ignoring the social, economic and political dynamics of the context in which something is going to be ultimately deployed. So our clean test beds, which might be suitable for some circumstances, are not likely to be suitable if you try and use the same system in multiple places. At the deployment stage, for example, we might be using aid algorithms that worked in one context but systematically exclude marginalised communities in another. So we might have a refugee processing system which, trained on one population, works perfectly fine but fails catastrophically when applied in a different context, with slightly different linguistic characteristics, social characteristics, economic needs and requirements. When you take these localised issues across the development, validation and deployment stages, this is a compounding of problems and risks which you can’t then remove by a set of governance which is attached to the end of a life cycle. It’s something which has to be addressed at each of these stages to ensure that these risks are avoided and not compounded.
There’s also the problem of inscrutability. Now, inscrutability is almost the opposite of the transparency and explainability that Michael mentioned earlier. But sometimes inscrutability is a design choice that takes place at a certain point, or several points, in the life cycle. At the development stage, rather than choosing something which is open source and understood as a model, you might choose a proprietary algorithm which is more niche, more sophisticated, a complex neural network selected because it seems more appropriate and more complicated, when actually a simpler, more explainable model could do the job; that is going to introduce inscrutability into the system at the development stage. Further on, at the stage where you’re actually validating or generating a model, you’re then going to create a system which is so complex that not only can the end users, the subjects of the system, not understand the decisions being made, but the users themselves may not be able to either, particularly if these users haven’t been the designers and simply purchased the systems from those asked to procure them. What’s the real-world impact of this? Well, it means that humanitarians or aid suppliers on the ground can’t explain to individuals why the decisions are being made as they are. They can’t explain why aid isn’t being delivered to one group while it is to another. They can’t explain why some resources are available in one place and not another. That undermines trust in both the humanitarian sector and in the systems being used, which further means that, in the long term, this life cycle of redeployment and redesign is going to have a less than effective impact on the very communities and the very peacebuilding that it’s designed to support. And the final point I’d raise is about life cycles themselves: we use the term cycle, but what do we mean by cyclical, and what does that actually imply for how things are used? Well, the problem is that if you look at a life cycle as a series of stages that are begun at one end and produce a tool at the other, and then perhaps cycle round again, it seems like a conveyor belt. It could be seen and operated on, operationally, by the designers, procurers, and the ultimate deployers of these systems as a series of checkboxes that you move through from one stage to the other once certain things have been completed. But what that means is that, rather than a series of checks and balances and means of ensuring that these risks are not compounded, we have a series of things which are simply checked off as complete, without sufficient evidence to that fact and without the ability to understand whether the system is suitable for what it’s being used for. And when that’s then recycled, and the requirements might be changed and this tool is deployed in a different context for a different purpose, we find ourselves further compounding the issues that we saw before.
So what I would like us to take away from this, finally, is that if we are to ensure that these systems are being used in a manner which is humane, ethical and principled, adding to our security and building peace rather than creating, or recreating, the conditions that have led to insecurity, unethical practice and risk to civilians, combatants and other already highly impacted and at-risk individuals, we need to ensure not only that we have a shared understanding of how these tools are made at the different stages of their life cycle, but also that we come up with a means of technical, ethical and humanitarian governance which intersects with all of these stages effectively. And I’ll leave it there and look forward to your questions.


Yasmin Afina: Thank you very much, Alexi, again, for this very comprehensive overview of why the life cycle management approach to the governance of AI is indeed important. I particularly like the way that you ended your remarks, by noting that this is a prerequisite to ensure that these technologies will indeed build peace instead of exacerbating the sources of insecurity and instability. So on that note, we have around 20 minutes, I would say, for an open discussion. I would highly encourage those who are in the room in Oslo, but also those joining us virtually, to ask questions to our panelists, but also, building on the Slido discussions that we had earlier, where we collected your responses on what AI and international peace and security means and on the role of the multi-stakeholder community, I would encourage you to take the floor to elaborate a bit more on these answers. But also, if you have anything else to add: for example, we heard from Alexi the importance of local contexts. How is AI being deployed and used, for example, in your respective regions or states or your organization to build peace and to enhance international peace and security? So on that note, I would like to open the floor now for those who are joining in person and online. For those who are online, I will keep an eye on the Zoom. But for those who are joining in person, I believe there’s a microphone on the side for those joining from the floor, and for those who are joining at the center table, I think there are microphones in front of you. So on that note, I’m opening the floor now, and perhaps I’ll give a few seconds as well for you to collect your thoughts or your questions. The gentleman on my left, I think you have a question. Please introduce yourself and share your name and where you’re coming from. And if your question is directed specifically at a speaker, please say so as well. Thank you very much.


Audience: Okay, thank you very much. My name is Francis Alaneme. I’m from the .ng domain name registry. And so it’s just more of a comment. So I know AI is widely used and AI is something that a lot of persons are jumping into, and it’s flying everywhere. A lot of content is generated with AI. And I think part of what AI adoption is driving us towards is trying to make imaginary things come real, and I think part of the algorithm should look at ways to actually, you know, make AI-generated content have more of a signature, so that people can actually easily identify what AI generated and what humans actually generated. You know, when you look at some video content, you see that there is video content that you think is real, and that kind of content can be used to pass some kind of false information, can be used to actually instigate some kind of, you know, violence in some places, where you see some kind of content that is actually not, you know, culture friendly, or something that can actually instigate some kind of thoughts in the minds of people. So I think there should be more of, em, you know, that kind of signature or that kind of, you know, thing to identify AI-generated content and human content. Thank you.


Yasmin Afina: Thank you very much, sir, for outlining the importance of ensuring some sort of signature, or at least a means to verify what is AI-generated and what is not, and perhaps the security implications of not being able to differentiate between the two. I see that we have a hand raised virtually by Bagus Jatmiko, who I know is joining us very late from Indonesia. Perhaps may I ask our IT technicians to


Bagus Jatmiko: display him on the screen. And Bagus, please, you have the floor now. Thank you very much, Bagus. If you can please unmute yourself and turn on your camera, if you would like to intervene. Okay, can you hear me now? Yeah, we can hear you. Okay, thank you. Sorry. Oh, there you go. Sorry for the connection and also the technical issues. So I see very familiar faces in this conference, and I would like to bring some concerns and maybe some questions to the panelists, in the sense that I’m working in the defense sector, and AI is being used exponentially. I also talked about this during the ICRC conference, virtually, last week, if I’m not mistaken. And I bring a concern about how AI is being used in a way that some of the commanders or the users within the military domain are unaware of the possibility that AI might be corrupted during use, like what we call emergent misalignment, or there’s also the misalignment with the system itself. And I also would like to bring the concern about the possibility, or maybe it’s not a possibility, the tendency of AI being psychopathic, in a way that it would provide the answers that the users would like to seek. And being in the battlefield, that kind of tendency would be, in a way, very risky and maybe dangerous, and it can actually misalign the user or the commander in the battlefield into taking, what you may call, a decision that might increase the risk for humanity and also for the civilian population. And this goes to my question: how would you all maybe provide the attention and maybe the focus on how AI is being used, especially in the military domain? This is for all the panelists. And how would you maybe encourage more the use of responsible AI within the military domain? Because if I relate it to humanitarian law, somehow in the fog of war, in conditions of uncertainty, most commanders would like to see the quick answers provided by AI DSS, and maybe they just ignore the possibility or the existence of law, or humanitarian law in this case. Thank you, Bagus. Perhaps may I please ask that you also introduce yourself, for those who are not familiar with your work and where you’re coming from? Yeah, sorry for not introducing myself. So my name is Commander Bagus Jatmiko, and I’m actually an Indonesian Navy officer. And I’m also a researcher in AI and information warfare, which is what brought my attention to the use of AI within the military domain and defense sector. Thank you.


Yasmin Afina: Thank you very much, Bagus. I see a gentleman here who would like to ask a question or perhaps share some comments, and then I’ll get back to our panelists for some reactions or answers to the questions. Gentleman, please.


Audience: Good evening, everybody. Allow me to raise a very short question in the beginning. Who is responsible for the mitigation of AI risks? This is a very short question for me. Is it the high-tech big companies who are creating and developing AI? Because it is not in the hands of the governments, especially in the developing countries right now. So let me come to the big issue here. While I’m following the rapid development and advancement of AI, especially in fields which are related to security, I am terrorized, you know, because, and I am not going to mention or name any country now, but we can see how AI is being used in current ongoing wars, and the victims behind the use of AI technology in autonomous weapons, for example: how civilians are being killed without accountability. So for this reason, looking from a developing country’s perspective, which has nothing in its hands right now, it is all in the hands of the big tech companies which exist in the powerful countries. So this is my issue here: how are we going to mitigate this risk ourselves? Thank you. May I please ask that you introduce yourself? Sorry, can you please introduce yourself in the microphone, just so we know who you are and where you’re coming from? My name is George Aden Maggett. I am a judge at the Supreme Court of Egypt, and I am also an honorary professor of law at Durham University in the UK.


Yasmin Afina: Thank you. Perfect. Thank you very much, sir. So, in the interest of time, I realize that we have 10 to 15 minutes left, so I just want to check in the room, virtually or in person, if there are any further questions or comments or remarks for our panelists, or anything to add to the discussions today. If not, I know that, Alexi, you’ve also put in the chat that there is an ongoing project on adding signatures to AI-generated content, the Content Authenticity Initiative, which you might be interested in, and perhaps, Alexi, you’ll be able to elaborate a bit more. Before I give the floor back to our panelists, I do note a question from Rowan Wilkinson from Chatham House. Hi, Rowan. Many policymakers are discussing the importance of AI openness in civilian contexts, including in meeting safety commitments through OSS and community oversight. Does the panel foresee a policy shift around openness in the AI peace and security domain? So, we do have quite a few questions and remarks and also reflections. We had a question surrounding AI authenticity, the implications of not knowing what is generated or not, and the destabilizing effects. We had a question from Bagus on the commander and the human-machine interaction in the battlefield, and perhaps also how we make sure that the use of AI remains indeed responsible in the hands of the commander, particularly under situations of pressure, such as in the battlefield. We had a question on who should be responsible for the mitigation of the risks of AI, particularly in the light of ongoing conflicts today and the implications for civilians. And finally, we have a question on openness in the AI peace and security domains. So perhaps may I ask, in the interest of time, Jingjie to start us off with three to four minutes. Please feel free to answer any of the questions, or add any other element based on what you’ve heard today. Jingjie, please, you have the floor. Thank you for your questions. So first, the question from Bagus. I think the first thing we need to do is knowledge sharing, because I assume that in the military, when you deploy an AI


Jingjie He: system, you developed it first as a project and then you deploy it. Many times, based on my experience from the civilian field and industry, the one who makes the decision whether to use, deploy or complete the project may not always be the one who understands the technologies. So knowledge sharing is very important. Transparency is important. Those people who make the decisions need to understand the technology perspective. And also, the second point I want to make is the importance of incentives. It is very important for militaries to understand that AI is not only a force multiplier, but also a threat multiplier. It is not only about the risk to civilians; it’s also about increasing the risk to your own combatants when you have a poorly designed, unverified AI system with uncertainties, which you cannot be confident about and which is a whole black box. So this kind of incentive is very important. With this understanding, I believe many militaries will be more incentivized to improve their systems. A quick answer to the second question: who’s responsible for AI governance? I think everyone. I’m sure Michael will talk more from the industry point of view, but I do sense that everyone is responsible for raising a voice, being sensitive about the importance of AI governance, and incentivizing or promoting a dialogue about AI risks. And the third question, about AI openness: I’m actually not sure what AI openness means here, because if you’re talking about openness of the algorithm, I think it’s very difficult. When we in the industry go for due diligence or technological scouting, we ask the company: what is it? What’s your core technology? They are likely to tell us it’s their own IP and they will not be able to reveal it. But look, we have a good system and it works perfectly; just believe our results. This is what happens. So if you’re talking about openness of the AI algorithm, I have a huge question mark about the feasibility and possibility of this kind of solution. Thank you.


Yasmin Afina: Thank you, Jingjie He, for your very sharp response, and also for the fact that you’re actually joining us from very far away, where it’s very late. So thank you very much for this. Michael, would you like to intervene now? Thank you, Yasmin. Happy to do so. A shared flag: Zoom keeps telling me that my internet connection is unstable. So if I pause at any moment, that’ll be the reason why.


Michael Karimian: In answer to the questions: Francis’s question on AI signatures, I appreciate the question. I think one way of thinking about this is, are there specific use cases where we really need AI signatures, and other use cases where we would be comfortable without them? I suspect that’s possibly the direction we will go in, but of course the proliferation of AI solutions means that there’ll always be solutions or actors who would circumvent that anyway; that doesn’t downplay the importance of having AI signatures in the first place. To Bagus’s question on emergent misalignment and AI-supported DSS: Bagus, your question really points to something which we’ve certainly discussed in the context of the Roundtable for AI Security and Ethics, and that’s the challenge which exists at the moment in having access to meaningful and trustworthy use cases to understand, in very effective ways, how AI is actually being used. I think the academic community, civil society, industry and governments, at the moment, are relying on a number of examples which, you know, partly come from hearsay or just perhaps aren’t that reflective of how AI is being used in security domains. But I’m hopeful that as AI is further adopted in various security domains, transparency around use cases will improve, and then we’ll be better able to understand their implications. To the question from our colleague from Egypt and Durham University: the judge is right, who has responsibility? Everyone. From a human rights perspective, of course, states have the state duty to protect, respect and fulfil human rights. Industry has a corporate responsibility to respect human rights, and individuals have a right to remedy when their rights have been harmed. Focusing specifically on the role of industry, what that means is that all companies, under the UN Guiding Principles on Business and Human Rights, have a responsibility to ensure that their products and services are not being used in ways that facilitate or contribute to serious human rights abuses. And this means that any engagement with a government, ministry of defence or armed forces, especially in the context of ongoing armed conflict or where there are credible allegations of international law violations, must be subject to rigorous due diligence, with clear red lines on misuse, and where risks cannot be mitigated, there should be a refusal to provide or maintain support. And actually that’s not new; this has been an established position for a number of years now, but of course what matters is implementation. And lastly, to Rowan’s question: yes, I would hope so, that we will see more openness, and actually one example of that is the RE-AIM process which I mentioned earlier, the responsible AI in the military domain process, last year hosted in South Korea and in the next six to twelve months to be hosted in Spain. So anyone in the audience, any stakeholders who are interested in this, should certainly keep an eye on the RE-AIM process. Thank you very much, Michael, for this, and for noting the importance of differentiating, of course, principles from actual implementation, and the importance as well of human rights in providing a framework to ensure that everyone is indeed held accountable, but also to ensure that civilians have a right to remedy, including in the context of AI for international peace and security. Finally, Alexi, would you like to share any concluding remarks and responses to any of the questions raised?


Alexi Drew: Thank you, I’ll run through these nice and quickly in the interest of giving people their time back. I’d like to start with the silver lining to the signatures and the demarcating of inauthentic content, or AI-generated content, from human-generated content. As someone who used to work in arms control, there’s a great thing to bear in mind: every time a new threat arises, or a new innovation creates a threat, it’s very quick for a counter to be developed against it, and that is just as true in the identification of inauthentically generated or machine-generated content as it is with any other risk of this type that we’ve seen before. So I’m encouraged to see that it’s not just the CAI that exists in this space. There are a number of initiatives coming with technical and non-technical means to give us the means, as Michael says, in critical circumstances, of being able to identify when content has been generated as opposed to when it has been created and is authentic. On Bagus’s question referencing compliance and command as a component: this is actually part of what I was referencing when I was talking about the need to ensure that governance, both ethical, legal and economic, is built in at every stage of the design life cycle. If we take IHL as part of that governance, a system should be designed, trained, tested, authenticated and verified, with the data selected, with its need to be compliant with IHL in mind. If it isn’t, that’s when you’re introducing the risk that something could be designed which is either completely incompatible with IHL or is open to being used in a manner which is non-compliant. If you actually treat the life cycle effectively in how you incorporate IHL across it, rather than just in a section of it, or in, say, the assurance stage, or treat it as a checkbox exercise, then you can actually constrain the risks of that going wrong. That being said, there are other components to this: the fact that any user of a system should be trained to understand what it can and what it cannot do, what it looks like when it fails, what circumstances have led to its failure in testing, and what influences its level of accuracy, so they can make informed decisions as to how much and whether, in fact, to trust an AI-based tool or weapon system, be it a decision-making system, strategic or tactical, or be it a direct weapon system. And it should also be that in some cases these tools simply aren’t used, because it’s understood, because of how these systems have been designed with IHL baked into each part of them, that in the context you’re seeking to apply them they simply cannot be compliant with IHL. On the subject of trust around these tools, and how particularly LLMs, some of them, have been found to be very non-critical of their human users, and how that might influence things: yes, that is a problem. They’re not designed to be critical and to push back on their human users. They’re designed to be supportive administrative assistants that say yes a lot, and that should be understood as a potential failing, with implications for how a military should design, create doctrine for, and then deploy a tool. Moving quickly on to the question of who owns it, where’s the responsibility: I agree with both previous speakers, Jingjie and Michael.
Everyone owns the responsibility here, and despite the complex ownership structures between the private sector, the public sector, and the global north and south, everyone, including those with seemingly less control, has a lever they can use. Be it taking part in global standard-setting organizations, technical or non-technical, or procurement strategies and procurement standards. If governance, IHL, ethical, social, and economic, is critical, then it should be a condition of procurement from government to suppliers. Then even if a government doesn't own the system or the services required to operate it, say AI as a service, the system will have been designed to meet these standards, because it is legally necessary to do so to satisfy the procurement requirements. Finally, a point on openness: I'm going to try to be positive with a bit of negativity here. I think we're at a point where innovation is being posed as a solution to our increasing state of insecurity and risk to peace, and it's been posited as a zero-sum game between innovation and security, or insecurity and constraint on innovation. That is not the case. You can in fact have security and innovation with adherence to values.
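A toy example may help make the point about technical countermeasures concrete. The sketch below is illustrative only: the function names and key are hypothetical, and it uses a symmetric stdlib HMAC, whereas real provenance schemes, such as the C2PA standard behind the Content Authenticity Initiative Drew mentions, use asymmetric signatures and certificate chains. What it shows is the core mechanism of a content signature: binding content to its provenance metadata (for example, an "AI-generated" flag) so that tampering with either becomes detectable.

```python
# Minimal provenance-signature sketch (illustrative, not the C2PA/CAI spec).
# Real systems use asymmetric keys and certificates; stdlib HMAC stands in here.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-the-publisher"  # hypothetical key

def sign_content(content: bytes, metadata: dict) -> dict:
    """Bind content and provenance metadata (e.g. 'AI-generated') together."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    """Return True only if neither content nor metadata has been tampered with."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

article = b"Machine-generated summary of today's events."
manifest = sign_content(article, {"generator": "example-llm", "ai_generated": True})
assert verify_content(article, manifest)             # untouched content verifies
assert not verify_content(b"edited text", manifest)  # tampered content fails
```

The design point is that the label travels with the content cryptographically: a downstream verifier can prove whether an "AI-generated" flag still matches the bytes it ships with, which is harder to strip silently than a visible watermark.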


Yasmin Afina: Thank you very much indeed, Alexi, for ending us on a positive note, and thank you to Jingjie He for adding the point that Chinese social media platforms also apply signatures to AI-generated content, which adds to the importance of collective responsibility for ensuring responsible AI and international peace and security. I note the importance of incentivization raised by Jingjie He, the importance of human rights as a framework, and of compliance with IHL. On that hopefully positive note, we are ending this workshop. Thank you very much everyone for joining us today, either online or in person. Please join me in giving a round of applause to our speakers online. Thank you very much.



Jingjie He

Speech speed

121 words per minute

Speech length

833 words

Speech time

412 seconds

Inclusive engagement across stakeholders is essential for effective global AI governance because technological challenges require interdisciplinary approaches

Explanation

Jingjie He argues that while technological challenges can often be addressed through technological solutions, identifying the true nature of AI challenges requires an interdisciplinary and multi-stakeholder approach. This inclusive approach ensures that a wide range of knowledge, expertise, and perspectives are taken into account in shaping responsible AI policies.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian
– Yasmin Afina

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


UN-sponsored platforms provide neutral, depoliticized spaces for knowledge-sharing beyond geopolitical constraints

Explanation

She emphasizes that UN-sponsored platforms like UNIDIR’s RAISE and IGF play a critical role in enabling multi-stakeholder engagement. What sets them apart from state-centric mechanisms is their unique ability to provide neutral, depoliticized, and inclusive spaces where knowledge-sharing and confidence-building can take place beyond geopolitical tensions.


Evidence

References to UNIDIR’s RAISE platform, IGF, and Global Digital Compact as examples of such platforms


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Everyone has responsibility for AI governance and raising awareness about AI risks

Explanation

When asked who is responsible for AI governance, Jingjie He responds that everyone has a role to play. She emphasizes the importance of raising voices, being sensitive about AI governance importance, and promoting dialogue about AI risks.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian
– Alexi Drew

Agreed on

Universal responsibility for AI governance


AI enhances satellite imagery analysis for conflict monitoring, as demonstrated by Amnesty International’s Darfur project

Explanation

Jingjie He provides a concrete example of how AI can foster international peace and security through satellite remote sensing. She explains that AI and machine learning are being applied to enhance analytical efficiency of satellite imagery for monitoring conflicts.


Evidence

Amnesty International’s collaboration with Element AI and 29,000 volunteers to develop tools for automatically analyzing satellite imagery for monitoring conflicts in Darfur


Major discussion point

AI Applications for Peace and Security


Topics

Cybersecurity | Human rights principles


AI can empower international peace, security, and non-proliferation missions through improved analytical capabilities

Explanation

She argues that AI applications in satellite imagery analysis represent just one example of many ways AI can benefit international peace, security, and non-proliferation missions. However, she also acknowledges the challenges that come with these applications.


Evidence

References her previous research showing challenges of adversarial attacks in such systems


Major discussion point

AI Applications for Peace and Security


Topics

Cybersecurity | Human rights principles


Knowledge sharing between technology developers and decision-makers is crucial in military contexts

Explanation

Jingjie He emphasizes that in military deployments of AI systems, the people making decisions about deployment may not always be those who understand the technology. She stresses the importance of transparency and knowledge sharing so decision-makers can understand the technology perspective.


Evidence

References her experience from civilian field and industries where decision-makers often don’t understand the technologies they’re deploying


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


AI serves as both force multiplier and threat multiplier, increasing risks for combatants with poorly designed systems

Explanation

She argues that militaries need to understand that AI is not only a force multiplier but also a threat multiplier. Poorly designed, unverified AI systems with uncertainties create risks not just for civilians but also for the military’s own combatants when they cannot be confident about the system’s performance.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Algorithm openness faces feasibility challenges due to intellectual property concerns

Explanation

When discussing AI openness, Jingjie He expresses skepticism about the feasibility of algorithm transparency. She explains that in industry due diligence, companies typically claim their core technology as intellectual property and refuse to reveal algorithms, instead asking clients to trust their results.


Evidence

Her experience in industry technology scouting, where companies refuse to reveal their core algorithms, claiming them as IP


Major discussion point

Technical Challenges and Risks


Topics

Legal and regulatory | Intellectual property rights


Disagreed with

– Michael Karimian

Disagreed on

Feasibility of AI algorithm transparency and openness


Adversarial attacks make AI systems more vulnerable and discussions more challenging

Explanation

She acknowledges that there are potential challenges with AI applications in peace and security contexts, specifically mentioning adversarial attacks as a vulnerability that makes AI systems more susceptible to manipulation and makes governance discussions more complex.


Evidence

References her previous research on adversarial attacks in AI systems


Major discussion point

Technical Challenges and Risks


Topics

Cybersecurity | Network security
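For readers outside machine learning, a worked toy example clarifies what an adversarial attack is. The sketch below, with entirely illustrative values and no claim to reflect the systems discussed in the session, applies the classic fast gradient sign method (FGSM) to a toy logistic-regression model: a small, targeted perturbation of the input flips the model's confident prediction.

```python
# FGSM on a toy logistic-regression "classifier" (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy model weights
b = 0.0
x = rng.normal(size=16)   # a benign input

def predict(x):
    """P(class = 1) under logistic regression."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Fast Gradient Sign Method: move every input feature a small step (epsilon)
# in the direction that most increases the loss for the true label.
y_true = 1.0
eps = 0.25
grad_wrt_x = (predict(x) - y_true) * w   # gradient of log-loss w.r.t. the input
x_adv = x + eps * np.sign(grad_wrt_x)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")  # pushed toward the wrong class
```

Applied to an image classifier used for satellite-imagery monitoring, the same principle means pixel-level changes invisible to a human analyst can change the model's output, which is why adversarial robustness is flagged here as a governance concern.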



Michael Karimian

Speech speed

149 words per minute

Speech length

1198 words

Speech time

480 seconds

Industry has critical role as developers and deployers, plus proactive stakeholders in establishing norms and safeguards

Explanation

Michael Karimian argues that industry has a critical role not just as developers and deployers of AI technology, but also as proactive stakeholders in establishing norms, standards, and safeguards to mitigate risks associated with AI in security contexts. He emphasizes that industry’s practical contributions to governance cannot be overstated.


Evidence

References the Roundtable for AI, Security and Ethics (RAISE), which has highlighted industry's practical contributions to governance


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Yasmin Afina

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


Industry actors are first to encounter AI risks due to direct involvement in development and deployment

Explanation

He argues that industry actors are often the first to encounter and understand AI risks and vulnerabilities because of their direct involvement in developing and deploying these technologies. This puts industry players in a unique position to provide expertise on technical feasibility, operational impacts, and risk mitigation strategies.


Major discussion point

Industry Responsibility and Due Diligence


Topics

Legal and regulatory | Human rights principles


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms

Explanation

Karimian emphasizes that industry must develop and adhere to clear standards that ensure AI systems used in security applications are transparent in their capabilities and limitations, with clearly articulated accountability mechanisms. This involves robust documentation practices, continuous monitoring, and the capability to audit AI systems.


Major discussion point

Industry Responsibility and Due Diligence


Topics

Legal and regulatory | Human rights principles


Agreed with

– Alexi Drew

Agreed on

Lifecycle approach is crucial for AI governance


Disagreed with

– Jingjie He

Disagreed on

Feasibility of AI algorithm transparency and openness


Companies have responsibility under UN guiding principles to ensure products aren’t used for human rights abuses

Explanation

He explains that under the UN guiding principles on business and human rights, all companies have a responsibility to ensure their products and services are not used to facilitate or contribute to serious human rights abuses. This means engagement with governments or armed forces, especially in conflict contexts, must be subject to rigorous due diligence and clear red lines on misuse.


Evidence

References UN guiding principles on business and human rights as established framework


Major discussion point

Industry Responsibility and Due Diligence


Topics

Human rights principles | Legal and regulatory


Agreed with

– Jingjie He
– Alexi Drew

Agreed on

Universal responsibility for AI governance


AI signatures may be needed for specific critical use cases rather than universal application

Explanation

In response to questions about AI content signatures, Karimian suggests thinking about whether there are specific use cases where AI signatures are really needed versus other use cases where they might not be necessary. He acknowledges that the proliferation of AI solutions means there will always be actors who would circumvent such measures.


Major discussion point

Content Authenticity and Misinformation


Topics

Legal and regulatory | Content policy


Agreed with

– Alexi Drew
– Francis Alaneme (Audience)

Agreed on

Need for technical solutions to AI content authenticity challenges



Alexi Drew

Speech speed

182 words per minute

Speech length

2065 words

Speech time

680 seconds

All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies

Explanation

Alexi Drew argues that despite complex ownership structures between private and public sectors and between global north and south, everyone has levers they can use. These include participating in globalized standard-setting organizations and using procurement strategies as governance tools.


Evidence

Suggests that if governance is critical, it should be a condition of procurement from government to suppliers


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Michael Karimian

Agreed on

Universal responsibility for AI governance


Governance cannot be added as afterthought but must be designed to fit each stage of the lifecycle

Explanation

Drew emphasizes that governance is not something that can be added after the fact as an afterthought. Instead, it needs to be something designed to fit into each stage of the AI system lifecycle, from development through validation to deployment.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian

Agreed on

Lifecycle approach is crucial for AI governance


Development, validation, and deployment stages each present unique risks that compound if not properly addressed

Explanation

She provides detailed examples of how localization issues can create compounding problems across the AI lifecycle. For instance, using Global North data to train models for Global South deployment, testing outside local contexts, and deploying systems that systematically exclude marginalized communities.


Evidence

Specific examples include refugee processing systems trained on one population failing when applied to populations with different linguistic or social characteristics, and aid algorithms that exclude marginalized communities


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles | Development


Systems should be designed, trained, and tested with compliance requirements like IHL built in from the start

Explanation

Drew argues that if International Humanitarian Law (IHL) compliance is required, AI systems should be designed, trained, tested, authenticated and verified with IHL compliance in mind from the beginning. This prevents systems from being designed that are incompatible with IHL or open to non-compliant use.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Lifecycle approach prevents treating governance as checkbox exercise rather than integrated process

Explanation

She warns against treating the AI lifecycle as a conveyor belt or series of checkboxes to be completed. Instead, she advocates for understanding lifecycles as requiring checks, balances, and means of ensuring risks are not compounded throughout the process.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Innovation can coexist with security and adherence to values, not a zero-sum game

Explanation

Drew concludes on a positive note, arguing against the false premise that innovation and security are in a zero-sum relationship. She contends that you can have both security and innovation while maintaining adherence to values, rejecting the notion that innovation must come at the expense of security or ethical constraints.


Major discussion point

AI Applications for Peace and Security


Topics

Legal and regulatory | Human rights principles


Military users need training to understand AI system capabilities, limitations, and failure modes

Explanation

Drew emphasizes that any user of an AI system should be trained to understand what the system can and cannot do, what failure looks like, what circumstances have led to failures in testing, and what influences accuracy levels. This enables informed decisions about how much to trust AI-based tools.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Counter-innovations quickly develop against new threats, including tools for identifying machine-generated content

Explanation

Drawing from her arms control background, Drew notes that every time a new threat arises or innovation creates a threat, counters are quickly developed. She applies this principle to AI-generated content, expressing encouragement that multiple initiatives exist to identify inauthentic or machine-generated content.


Evidence

References the Content Authenticity Initiative (CAI) and notes there are multiple technical and non-technical initiatives in this space


Major discussion point

Content Authenticity and Misinformation


Topics

Cybersecurity | Content policy


Agreed with

– Michael Karimian
– Francis Alaneme (Audience)

Agreed on

Need for technical solutions to AI content authenticity challenges



Bagus Jatmiko

Speech speed

130 words per minute

Speech length

477 words

Speech time

219 seconds

AI systems in military face risks of emergent misalignment and tendency to provide answers users want to hear

Explanation

Commander Bagus Jatmiko, working in the defense sector, raises concerns about AI being used exponentially in military contexts where commanders may be unaware that AI might be corrupted during use through emergent misalignment. He also notes the tendency of AI to be ‘psychopathic’ in providing answers that users want to hear rather than accurate assessments.


Evidence

His experience working in the defense sector and AI/information warfare research


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Commanders may ignore humanitarian law when seeking quick AI-generated answers in fog of war

Explanation

Jatmiko expresses concern that in battlefield conditions of uncertainty and the ‘fog of war,’ commanders seeking quick answers from AI decision support systems may ignore the possibility or existence of humanitarian law. This creates risks for humanity and civilian populations.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Human rights principles



Audience

Speech speed

138 words per minute

Speech length

490 words

Speech time

211 seconds

AI-generated content needs signatures for identification to prevent false information and violence instigation

Explanation

Francis Alaneme from the .ng domain registry argues that AI adoption is making imaginary things seem real, and AI-generated content should have signatures so people can easily identify what is AI-generated versus human-generated. He warns that realistic AI-generated video content can be used to pass false information and instigate violence in some places.


Evidence

Examples of video content that appears real but could be culturally inappropriate or violence-instigating


Major discussion point

Content Authenticity and Misinformation


Topics

Content policy | Cybersecurity


Agreed with

– Michael Karimian
– Alexi Drew

Agreed on

Need for technical solutions to AI content authenticity challenges


Big tech companies in powerful countries hold significant control while developing countries have limited influence

Explanation

Judge George Aden Maggett from Egypt’s Supreme Court raises concerns about the power imbalance in AI development and deployment. He argues that big tech companies in powerful countries control AI development while developing countries have nothing in their hands, leading to situations where AI is used in autonomous weapons killing civilians without accountability.


Evidence

References current ongoing wars where AI is being used in autonomous weapons with civilian casualties


Major discussion point

Industry Responsibility and Due Diligence


Topics

Human rights principles | Legal and regulatory | Development



Yasmin Afina

Speech speed

150 words per minute

Speech length

3381 words

Speech time

1344 seconds

Multi-stakeholder engagement is essential for AI governance to bridge divides and overcome competitiveness and distrust

Explanation

Yasmin Afina emphasizes that UNIDIR’s approach brings together experts from diverse countries including China, Russia, US, UK, but also Namibia, Ecuador, Kenya, and India to bridge divides and facilitate conversation where there is none on AI and security issues. The goal is to overcome competitiveness and distrust through inclusive dialogue.


Evidence

UNIDIR’s RAISE initiative bringing together experts from China, Russia, the United States, the United Kingdom, Namibia, Ecuador, Kenya, and India


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Michael Karimian

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


AI governance requires both bottom-up and top-down approaches to ensure public trust and legitimacy

Explanation

Afina argues that discussions on AI and security should not be one-way but should incorporate both bottom-up and top-down approaches. This dual approach is necessary to warrant public trust and legitimacy in AI governance processes.


Evidence

UNIDIR’s platform design for open, inclusive, and meaningful dialogue


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Cross-disciplinary literacy improvement is crucial for AI governance in security contexts

Explanation

Afina emphasizes the importance of improving cross-disciplinary literacy as part of multi-stakeholder engagement on AI and security issues. This reflects the complex nature of AI challenges that require understanding across different fields and disciplines.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Interdisciplinary approaches


AI governance should disrupt monopolies and ensure all voices from all layers of society are heard

Explanation

Afina advocates for using platforms like RAISE to disrupt monopolies in the hands of the few and ensure that all voices are heard from all layers of society. This reflects a commitment to democratizing AI governance rather than leaving it to a select few powerful actors.


Evidence

RAISE platform design and objectives


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Voluntarily funded institutes face dire fundraising situations that threaten dialogue facilitation

Explanation

Afina acknowledges the difficulty that the UN and UNIDIR face in fundraising, noting the dire situation they face today in enabling such dialogue. As a voluntarily funded institute, UNIDIR relies on voluntary contributions, which creates sustainability challenges for important governance initiatives.


Evidence

UNIDIR’s status as a voluntarily funded institute relying on voluntary contributions


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Development


AI’s unique nature requires multi-stakeholder perspectives for understanding implications on international peace and security

Explanation

Afina argues that due to AI technology’s highly unique nature, UNIDIR quickly understood the importance of multi-stakeholder engagement and perspectives to obtain input on AI’s implications for international peace and security. This recognition led to the establishment of platforms for inclusive dialogue.


Evidence

UNIDIR’s establishment of multi-stakeholder platforms and the RAISE initiative


Major discussion point

AI Applications for Peace and Security


Topics

Legal and regulatory | Human rights principles


Agreements

Agreement points

Universal responsibility for AI governance

Speakers

– Jingjie He
– Michael Karimian
– Alexi Drew

Arguments

Everyone has responsibility for AI governance and raising awareness about AI risks


Companies have responsibility under UN guiding principles to ensure products aren’t used for human rights abuses


All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies


Summary

All three main speakers agree that responsibility for AI governance is shared across all stakeholders – governments, industry, civil society, and individuals – rather than being concentrated in any single entity.


Topics

Legal and regulatory | Human rights principles


Multi-stakeholder engagement is essential for effective AI governance

Speakers

– Jingjie He
– Michael Karimian
– Yasmin Afina

Arguments

Inclusive engagement across stakeholders is essential for effective global AI governance because technological challenges require interdisciplinary approaches


Industry has critical role as developers and deployers, plus proactive stakeholders in establishing norms and safeguards


Multi-stakeholder engagement is essential for AI governance to bridge divides and overcome competitiveness and distrust


Summary

There is strong consensus that effective AI governance requires inclusive participation from diverse stakeholders, bringing together different perspectives, expertise, and capabilities to address complex technological challenges.


Topics

Legal and regulatory | Human rights principles


Lifecycle approach is crucial for AI governance

Speakers

– Michael Karimian
– Alexi Drew

Arguments

Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Governance cannot be added as afterthought but must be designed to fit each stage of the lifecycle


Summary

Both speakers emphasize that governance considerations must be integrated throughout the entire AI system lifecycle, from development through deployment, rather than being treated as an add-on or afterthought.


Topics

Legal and regulatory | Human rights principles


Need for technical solutions to AI content authenticity challenges

Speakers

– Michael Karimian
– Alexi Drew
– Francis Alaneme (Audience)

Arguments

AI signatures may be needed for specific critical use cases rather than universal application


Counter-innovations quickly develop against new threats, including tools for identifying machine-generated content


AI-generated content needs signatures for identification to prevent false information and violence instigation


Summary

There is agreement that technical solutions are needed to address AI-generated content authenticity, though with recognition that implementation may vary by use case and that counter-measures are rapidly developing.


Topics

Content policy | Cybersecurity


Similar viewpoints

Both speakers emphasize the critical importance of knowledge transfer and transparency between those who develop AI technologies and those who make decisions about their deployment, particularly in security contexts.

Speakers

– Jingjie He
– Michael Karimian

Arguments

Knowledge sharing between technology developers and decision-makers is crucial in military contexts


Industry actors are first to encounter AI risks due to direct involvement in development and deployment


Topics

Legal and regulatory | Cybersecurity


Both speakers highlight the critical need for military personnel to understand AI system limitations and potential failure modes to make informed decisions about trust and deployment in security contexts.

Speakers

– Alexi Drew
– Bagus Jatmiko

Arguments

Military users need training to understand AI system capabilities, limitations, and failure modes


AI systems in military face risks of emergent misalignment and tendency to provide answers users want to hear


Topics

Cybersecurity | Legal and regulatory


Both speakers maintain an optimistic view that AI can be a positive force for peace and security when properly governed, rejecting the notion that innovation must come at the expense of security or ethical considerations.

Speakers

– Jingjie He
– Alexi Drew

Arguments

AI can empower international peace, security, and non-proliferation missions through improved analytical capabilities


Innovation can coexist with security and adherence to values, not a zero-sum game


Topics

Legal and regulatory | Human rights principles


Unexpected consensus

Global South representation and power imbalances

Speakers

– Yasmin Afina
– George Aden Maggett (Audience)
– Alexi Drew

Arguments

AI governance should disrupt monopolies and ensure all voices from all layers of society are heard


Big tech companies in powerful countries hold significant control while developing countries have limited influence


All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies


Explanation

Unexpectedly, there was strong consensus across speakers from different sectors (UN, judiciary, ICRC) about the need to address power imbalances between Global North tech companies and Global South stakeholders, with practical suggestions for how developing countries can exercise influence through procurement and standards participation.


Topics

Legal and regulatory | Human rights principles | Development


Limitations of algorithm transparency

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


AI signatures may be needed for specific critical use cases rather than universal application


Explanation

Both academic and industry perspectives unexpectedly converged on the practical limitations of full algorithmic transparency, acknowledging intellectual property constraints while still supporting targeted transparency measures for critical applications.


Topics

Legal and regulatory | Intellectual property rights


Overall assessment

Summary

The discussion revealed remarkably high consensus among speakers on fundamental principles of AI governance, including shared responsibility, multi-stakeholder engagement, lifecycle management, and the need for technical solutions to content authenticity. There was also unexpected agreement on addressing Global South representation and practical limitations of algorithmic transparency.


Consensus level

High level of consensus with significant implications for AI governance frameworks. The agreement across diverse stakeholders (academic, industry, humanitarian, military, judicial) suggests these principles have broad legitimacy and could form the foundation for effective global AI governance mechanisms. The consensus on shared responsibility and multi-stakeholder approaches particularly validates current UN and multilateral efforts in this space.


Differences

Different viewpoints

Feasibility of AI algorithm transparency and openness

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Summary

Jingjie He expresses strong skepticism about algorithm transparency due to IP concerns and industry practices of protecting core technology, while Michael Karimian advocates for transparency standards and accountability mechanisms in AI systems used in security applications.


Topics

Legal and regulatory | Intellectual property rights


Unexpected differences

Practical implementation of AI transparency in security contexts

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Explanation

This disagreement is unexpected because both speakers are advocates for responsible AI governance, yet they have fundamentally different views on whether transparency is achievable. Jingjie He’s practical industry experience leads her to question feasibility, while Michael Karimian’s industry perspective emphasizes the necessity and possibility of transparency standards.


Topics

Legal and regulatory | Intellectual property rights


Overall assessment

Summary

The discussion shows remarkably high consensus among speakers on fundamental principles of AI governance, with only one significant disagreement on algorithm transparency feasibility. Most differences are about emphasis and approach rather than fundamental disagreement.


Disagreement level

Low level of disagreement with high implications – the transparency debate touches on core tensions between security, commercial interests, and accountability that are central to AI governance in security contexts. The consensus on multi-stakeholder responsibility suggests strong foundation for collaborative approaches, but the transparency disagreement highlights practical implementation challenges that could impede progress.




Takeaways

Key takeaways

Multi-stakeholder engagement is essential for effective AI governance in security contexts, requiring inclusive participation from governments, industry, civil society, and international organizations


Industry has a critical responsibility as both developers and deployers of AI technology, with obligations under UN guiding principles to prevent human rights abuses


Lifecycle management approach is crucial – governance must be integrated at development, validation, and deployment stages rather than added as an afterthought


AI serves as both a force multiplier and threat multiplier in military contexts, requiring careful consideration of risks to both civilians and combatants


Everyone shares responsibility for AI governance, though different stakeholders have different levers of influence including procurement standards and participation in standard-setting organizations


AI has positive applications for peace and security, such as enhancing satellite imagery analysis for conflict monitoring and humanitarian purposes


Content authenticity and AI signature identification are important for preventing misinformation and violence instigation


Knowledge sharing between technology developers and decision-makers is crucial, especially in military contexts where commanders may not fully understand AI system limitations


Resolutions and action items

Continue supporting and participating in UN-sponsored platforms like UNIDIR’s RAISE and the RE-AIM process for responsible AI in military domains


Implement robust due diligence processes across the AI lifecycle from design through deployment and decommissioning


Develop clear standards ensuring AI systems used in security applications are transparent with accountability mechanisms


Provide training for military users to understand AI system capabilities, limitations, and failure modes


Integrate compliance requirements like International Humanitarian Law (IHL) into each stage of AI system development rather than treating it as a checkbox exercise


Support capacity-building initiatives particularly in regions where regulatory frameworks are still emerging


Unresolved issues

Funding sustainability for UN-sponsored AI governance platforms and multi-stakeholder initiatives


Technical feasibility of requiring algorithm openness due to intellectual property concerns


Power imbalance between big tech companies in developed countries and developing nations with limited influence over AI governance


Lack of meaningful and trustworthy use cases to understand how AI is actually being used in security domains


How to effectively implement AI signatures universally versus only for specific critical use cases


Addressing emergent misalignment and AI systems’ tendency to provide answers users want to hear rather than critical assessment


Ensuring compliance with humanitarian law in high-pressure battlefield situations where commanders seek quick AI-generated answers


Suggested compromises

Focus AI signature requirements on specific critical use cases rather than universal application across all AI-generated content


Balance innovation with security through integrated governance approaches rather than viewing them as zero-sum trade-offs


Combine technical and non-technical means for identifying machine-generated content rather than relying solely on one approach


Use procurement standards as leverage for governance compliance even when governments don’t own the AI systems or services


Develop counter-innovations and defensive measures alongside AI advancement to address emerging threats


Thought provoking comments

AI is not only a force multiplier, but also a threat multiplier. It is not only about the risk of civilians. It’s also about increasing risk of your own combatants when you have a poorly designed, unverified AI system with uncertainties and you cannot be confident about it and there’s a whole black box.

Speaker

Jingjie He


Reason

This comment reframes the AI security discussion by highlighting that AI risks aren’t just external threats to civilians, but internal risks to military forces themselves. The ‘threat multiplier’ concept introduces a crucial dual perspective that challenges the common narrative of AI as purely advantageous in military contexts.


Impact

This shifted the conversation from viewing AI governance as primarily about protecting others to recognizing it as essential for protecting one’s own forces. It provided a strategic incentive framework that could motivate military adoption of responsible AI practices based on self-interest rather than just ethical obligations.


Governance is not something that can be added on after the fact. It’s not an afterthought. It needs to be something which is designed to fit in each stage of the life cycle… we have a series of things which is simply checked off as complete, without sufficient evidence to the fact, without the ability to understand: is this system suitable for what it’s being used for?

Speaker

Alexi Drew


Reason

This fundamentally challenges the conventional approach to AI governance by arguing against treating it as a compliance checklist. It introduces the critical insight that governance must be embedded throughout the development process, not retrofitted, and warns against the dangerous illusion of safety through checkbox exercises.


Impact

This comment elevated the technical discussion to a more sophisticated understanding of systemic governance challenges. It influenced subsequent speakers to address implementation gaps and moved the conversation from ‘what should be done’ to ‘how governance actually fails in practice’ and why current approaches are insufficient.


Who is responsible for the mitigation of AI risks? Is it high tech big companies who are creating AI and developing AI? Because it is not in the hand of the government, especially in the developing countries right now… we can see how AI is being used in current ongoing wars. And the victims behind the use of AI technology in autonomous weapon, for example, how civilians are being killed without accountability.

Speaker

George Aden Maggett (Egyptian Supreme Court Judge)


Reason

This comment powerfully highlighted the global power imbalance in AI governance and connected abstract policy discussions to real-world consequences. Coming from a judicial perspective from the Global South, it brought urgent moral clarity about accountability gaps and the disconnect between those who develop AI and those who suffer its consequences.


Impact

This intervention fundamentally shifted the tone from technical optimization to urgent ethical accountability. It forced all subsequent speakers to address the responsibility question directly and grounded the abstract governance discussion in current conflict realities. It also highlighted the Global South perspective that had been somewhat absent from the technical discussions.


I bring concern about how AI is being used in a way that some of the commander or the user within the military domain is unaware of the possibility that AI might be corrupted during the use… And I also would like to bring the concern about the possibility of… AI being psychopath in a way that… would provide the answers that the users would like to seek. And being in the battlefield, that kind of tendency would be very, in a way, very risky and maybe dangerous.

Speaker

Commander Bagus Jatmiko (Indonesian Navy)


Reason

This comment introduced the critical concept of AI systems potentially being designed to tell users what they want to hear rather than what they need to know, especially dangerous in high-stakes military decisions. The ‘psychopath’ characterization, while provocative, highlighted how AI systems lack genuine critical thinking and may enable confirmation bias in life-or-death situations.


Impact

This shifted the discussion from technical reliability to psychological and cognitive risks in human-AI interaction. It introduced the concept of AI as potentially manipulative rather than just unreliable, adding a new dimension to the governance challenge that subsequent speakers had to address in their responses about training and system design.


You can in fact have security and innovation with adherence to values… innovation is being posed as a solution to our increasing state of insecurity and a risk to peace. And it’s been posited as a zero-sum game between innovation and security or insecurity and constraint on innovation. That is not the case.

Speaker

Alexi Drew


Reason

This comment directly challenged the false dichotomy often presented in AI policy discussions – that we must choose between innovation and safety/ethics. It reframed the entire governance challenge as a design problem rather than a trade-off, suggesting that responsible development is not inherently constraining but rather a different approach to innovation.


Impact

This provided a positive, solution-oriented conclusion that synthesized the various concerns raised throughout the discussion. It shifted the final tone from problem-focused to possibility-focused, suggesting that the governance challenges discussed were solvable through better design rather than fundamental limitations on AI development.


Overall assessment

These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, accountability, and practical implementation challenges. The progression moved from technical considerations (lifecycle management, signatures) to strategic reframing (threat multiplier concept), to urgent moral questions (Global South accountability concerns), to psychological risks (AI manipulation), and finally to a synthesis that rejected false trade-offs. The most impactful comments came from practitioners with direct experience (military officer, judge) who grounded abstract governance concepts in real-world consequences. This created a discussion that was both technically informed and ethically urgent, with each major intervention building complexity and shifting the conversation toward more fundamental questions about power, responsibility, and the human costs of AI deployment in security contexts.


Follow-up questions

How can we make multi-stakeholder AI governance platforms like RAISE more sustainable and address funding challenges?

Speaker

Jingjie He


Explanation

She noted that platforms like RAISE face funding issues and sustainability concerns, emphasizing this should be a collective effort requiring more resources and contributions from all stakeholders.


How can we better address adversarial attacks on AI systems used for peace and security monitoring?

Speaker

Jingjie He


Explanation

She mentioned that adversarial attacks pose challenges to AI systems used in satellite imagery analysis for conflict monitoring, making discussions more complex and requiring further research.


What specific technical standards and accountability mechanisms should be developed for AI systems in security applications?

Speaker

Michael Karimian


Explanation

He emphasized the need for clear standards ensuring transparency in AI capabilities and limitations, with robust documentation, monitoring, and auditing capabilities.


How can we develop more effective technical, ethical, and humanitarian governance that intersects with all stages of the AI lifecycle?

Speaker

Alexi Drew


Explanation

She stressed the need for governance mechanisms that work across development, validation, and deployment stages rather than being added as an afterthought.


How can AI-generated content be reliably identified and distinguished from human-generated content to prevent misinformation and violence?

Speaker

Francis Alaneme


Explanation

He raised concerns about AI-generated video content being used to spread false information and instigate violence, emphasizing the need for signature systems to identify AI-generated content.


How can we address emergent misalignment and the risk of AI systems becoming ‘psychopathic’ in military decision-making contexts?

Speaker

Commander Bagus Jatmiko


Explanation

He expressed concern about AI systems potentially being corrupted or misaligned during use in battlefield conditions, and the tendency of AI to provide answers users want to hear rather than accurate assessments.


Who should be held responsible for mitigating AI risks, particularly when big tech companies from powerful countries control the technology while developing countries bear the consequences?

Speaker

Judge George Aden Maggett


Explanation

He raised concerns about accountability for AI-related civilian casualties in current conflicts and the power imbalance between tech companies in developed countries and affected populations in developing countries.


Will there be a policy shift toward greater AI openness in peace and security domains, similar to civilian contexts?

Speaker

Rowan Wilkinson


Explanation

The question explores whether open-source approaches and community oversight models used in civilian AI safety could be applied to AI systems used for peace and security purposes.


How can we improve access to meaningful and trustworthy use cases to better understand how AI is actually being used in security domains?

Speaker

Michael Karimian


Explanation

He noted that the academic community, civil society, industry, and governments currently rely on limited examples that may not be reflective of actual AI use in security contexts.


How can procurement standards be used as a lever to ensure AI systems comply with international humanitarian law and ethical standards?

Speaker

Alexi Drew


Explanation

She suggested that even countries without direct control over AI development could use procurement conditions to enforce governance standards, requiring further exploration of implementation mechanisms.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #263 Public Service Media and Meaningful Digital Access

Day 0 Event #263 Public Service Media and Meaningful Digital Access

Session at a glance

Summary

This discussion at the Internet Governance Forum focused on the role of public service media in providing meaningful digital access, particularly in contexts where internet censorship and digital authoritarianism are growing concerns. The session was organized by the BBC and Deutsche Welle, with panelists including Patrick Leusch from Deutsche Welle, Abdallah Alsalmi from the BBC, Paula Gori from the European Digital Media Observatory, and Poncelet Ileleji from Joko Labs in Gambia.


The conversation began by distinguishing between basic internet connectivity and meaningful digital access, which encompasses reliable and affordable connectivity, appropriate devices, digital literacy, relevant local content, and safe digital environments. Leusch presented how international broadcasters face increasing censorship challenges, particularly in countries like Iran, Russia, and China, requiring sophisticated circumvention technologies to reach audiences seeking independent information during crises. Deutsche Welle and BBC invest heavily in tools like VPNs, proxy servers, and mirror sites to bypass censorship, with legal justification based on Article 19 of the UN Declaration of Human Rights regarding access to information.


Gori emphasized the connection between meaningful connectivity and disinformation, noting that public service media serve as crucial solutions to combat false information while maintaining transparency in ownership and funding. She highlighted how crisis situations demonstrate the vital role of trusted public media sources. Ileleji brought a grassroots perspective from Africa, advocating for strengthening community radio stations and local media partnerships to serve rural populations who lack broadband access but rely on radio for essential information about health, education, and agriculture.


The discussion revealed that current regulatory frameworks, including the EU’s Digital Services Act, face implementation challenges, particularly regarding data access for researchers studying platform algorithms. Participants agreed that a strengthened multi-stakeholder approach, updated international human rights frameworks, and better support for local media infrastructure are essential for achieving meaningful digital access globally.


Keypoints

## Major Discussion Points:


– **Meaningful Digital Access vs. Basic Connectivity**: The distinction between simply having internet access and having meaningful digital access, which includes reliable/affordable connectivity, appropriate devices, digital literacy, relevant local content, and safe digital environments. This concept goes beyond just being connected to focus on the quality and utility of the internet experience.


– **Internet Censorship and Circumvention Technologies**: How authoritarian governments are increasingly blocking access to independent media content, and the technical and ethical challenges faced by public service broadcasters like BBC and Deutsche Welle in developing circumvention tools (VPNs, proxies, mirror servers) to reach audiences in countries like Iran, Russia, and China.


– **Community-Level Media Infrastructure**: The critical role of community radio stations, particularly in rural Africa, as intermediaries for delivering reliable information to populations with limited broadband access. The need to strengthen these local media outlets through partnerships with international broadcasters and digital literacy training.


– **Platform Transparency and Algorithmic Accountability**: The challenges researchers and media organizations face in understanding how social media algorithms work, the lack of data access despite regulations like the EU’s Digital Services Act, and how algorithmic preferences for emotional/sensational content can amplify disinformation.


– **Regulatory and Policy Framework Gaps**: The need to update international frameworks like Article 19 of the UN Declaration of Human Rights, strengthen multi-stakeholder governance models, and implement existing policies like the Global Digital Compact to better protect internet freedom and access to information.


## Overall Purpose:


The discussion aimed to explore how public service media can enhance meaningful digital access globally, examining both the technical challenges of reaching audiences under authoritarian censorship and the broader policy frameworks needed to ensure equitable, safe, and useful internet access for all populations.


## Overall Tone:


The discussion maintained a professional, collaborative tone throughout, with participants sharing expertise and building on each other’s points constructively. While addressing serious challenges like censorship and disinformation, the speakers remained solution-oriented and emphasized the importance of multi-stakeholder cooperation. The tone was urgent but not alarmist, reflecting both the gravity of digital rights issues and optimism about potential solutions through coordinated action.


Speakers

– **Mr. Patrick Leusch** – Head of European Affairs at Deutsche Welle (Germany’s international broadcaster), Session Moderator


– **MODERATOR** – Online moderator (Oliver Ings, Distribution Manager at BBC)


– **Audience** – Various audience members and participants


– **Mr. Poncelet Ileleji** – CEO of Joko Labs in Banjul, Gambia; ICT expert with extensive experience in ICT development


– **Giacomo Mazzone** – Representative from Eurovision


– **Mr. Abdallah Alsalmi** – Policy Advisor at the BBC, Session Co-organizer


– **Ms. Paula Gori** – Secretary General and Coordinator of the European Digital Media Observatory (EDMO)


**Additional speakers:**


– **Thora** – PhD researcher from Iceland studying how very large online platforms and search engines (VLOPs and VLOSEs) are undermining democracy in the EEA


Full session report

# Public Service Media and Meaningful Digital Access: IGF Session Report


## Executive Summary


This Internet Governance Forum session, organized by the BBC and Deutsche Welle, examined how public service media can provide meaningful digital access in an era of increasing internet censorship. Moderated by Patrick Leusch from Deutsche Welle, the discussion featured Mr. Abdallah Alsalmi from the BBC (participating remotely from London due to flight cancellations), Mr. Poncelet Ileleji from Joko Labs in Gambia, Ms. Paula Gori from the European Digital Media Observatory, Giacomo Mazzone from Eurovision, and Thora, a PhD researcher studying platform impacts on democracy.


The session explored the distinction between basic connectivity and meaningful digital access, examining technical circumvention strategies, community media infrastructure, and platform governance challenges. Participants revealed significant disagreements on content regulation approaches while finding common ground on the importance of multi-stakeholder governance and public service media’s crisis response role.


## Defining Meaningful Digital Access


Mr. Alsalmi opened by distinguishing meaningful digital access from simple connectivity: “We need to go beyond just simple connectivity and beyond just having a device that is connected to the internet because it’s all about the experience, it’s all about what the internet users can make of the internet.”


He clarified that while the UN’s Universal Meaningful Connectivity (UMC) provides specific development metrics, meaningful digital access focuses on the qualitative user experience and practical utility of internet services. This encompasses reliable connectivity, appropriate devices, digital literacy, relevant local content, and secure digital environments.


## Circumvention Technologies and Legal Framework


### Deutsche Welle’s Technical Approach


Patrick Leusch detailed Deutsche Welle’s substantial investment in circumvention technologies to reach audiences in countries like Iran, Russia, and China. The broadcaster employs VPN services, proxy servers, mirror websites, and tools like Psiphon and Tor. He highlighted their collaboration with the Italian organization UNI to develop the News Media Scan tool, which shows which websites in a given country are currently blocked and which remain accessible.
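To make the monitoring idea concrete, here is a minimal sketch of the kind of reachability probe a blocked-site monitor rests on. It is an illustrative toy, not Deutsche Welle’s or UNI’s actual implementation, and the probe list is a hypothetical placeholder. Real censorship measurement (OONI-style testing, for example) must also distinguish deliberate interference from ordinary outages by comparing results against an uncensored vantage point.

```python
# Illustrative only: a naive reachability probe in the spirit of a
# blocked-site monitor. A single failed fetch proves nothing by itself;
# real tools compare results from censored and uncensored vantage points.
import socket
import urllib.error
import urllib.request

PROBE_LIST = [  # hypothetical placeholder URLs
    "https://www.dw.com",
    "https://www.bbc.com",
]

def probe(url: str, timeout: float = 10.0) -> str:
    """Crudely classify one fetch attempt; a heuristic, not proof of blocking."""
    request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return f"reachable (HTTP {response.status})"
    except urllib.error.HTTPError as err:
        # 403, or the censorship-specific 451, can hint at filtering.
        return f"suspicious (HTTP {err.code})"
    except (urllib.error.URLError, socket.timeout, OSError) as err:
        # Timeouts and connection resets are how DNS tampering,
        # IP blocking, or throttling often present themselves.
        return f"possibly blocked ({err})"

if __name__ == "__main__":
    for site in PROBE_LIST:
        print(site, "->", probe(site))
```

A production monitor would repeat such probes over time and from several networks, since a site that is down for everyone is an outage, not censorship.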


Leusch provided specific examples of usage spikes during crises, including the Prigozhin coup attempt and Navalny’s death, demonstrating increased demand for alternative information sources during political upheavals. The technical work requires permanent adaptation as censorship methods vary significantly between countries and evolve continuously.


### Legal Justification and Challenges


Deutsche Welle’s legal team, after consulting the German Bundestag’s Scientific Service, established justification for circumvention tools under Article 19 of the Universal Declaration of Human Rights. However, Alsalmi argued for updating this framework: “Article 19 is really, I would say, is really outdated and we need to have another look at it, update it, and renew commitments to it… Any government can shut down the internet at any time without due recourse to legal background or text.”


## Community-Level Infrastructure Perspective


### African Connectivity Challenges


Mr. Ileleji provided grassroots perspective from sub-Saharan Africa, where approximately 37% of the population has broadband connectivity according to ITU statistics. He emphasized community radio’s continued importance: “I like to look at it from a grassroots level. What information do community radios are able to provide to their citizens?”


He described how community radio stations serve as intermediaries for rural populations, providing information about health, education, and agriculture while combating fake news spreading through WhatsApp. Ms. Gori recalled that some years ago Meta and Google launched connectivity projects in African regions using balloons and drones, but on the condition that users could access only a limited number of websites, leaving citizens connected yet locked into content that is neither meaningful nor safe.


### Partnership Approach


Rather than waiting for comprehensive broadband deployment, Ileleji advocated strengthening partnerships between international broadcasters and community radio stations, emphasizing digital literacy tools and training. This approach recognizes existing technological and economic constraints while building on established community media infrastructure.


## Platform Governance and Data Access


### Research Challenges


Ms. Gori highlighted obstacles in understanding platform operations despite regulations like the EU’s Digital Services Act (DSA). She expressed particular concern about AI systems: “We even don’t know the answers that Gen AI is giving to people. So whenever you ask an AI chatbot about something, it is giving you an answer and no one knows it. It is like between you and the chatbot, which is creating an additional element, probably even more scary.”


The DSA’s data access provisions remain inoperative because the European Commission has yet to publish the Delegated Act that would make them enforceable, leaving researchers unable to access the platform data needed to study algorithmic behavior and disinformation patterns. Gori also worried about a “two-speed” system in which only well-funded institutions can afford to analyze such data.


### Regulatory Approaches


Gori used a highway metaphor for connectivity: an infrastructure everyone is free to use, but one that still needs a few globally shared rules to be safe and useful. On that basis she advocated risk-based regulation targeting how platforms operate rather than the content itself. She also noted the increased reliance on trusted sources during COVID-19, highlighting public service media’s stabilizing role during information uncertainty.


## Content Regulation Debate


A significant disagreement emerged on content regulation approaches. Mr. Ileleji took a firm stance: “We shouldn’t have regulation on content. It goes against freedom of speech. So immediately you start trying to regulate content, then you are infringing on the rights of people.”


In contrast, Giacomo Mazzone argued that fact-checking alone is insufficient and called for a more holistic approach, including regulation that would strengthen media organizations’ hand in negotiations with platforms; he also asked about a recent pledge to the platforms launched by the European Broadcasting Union and newspaper associations. This exchange reflected broader tensions between combating disinformation and preserving freedom of expression.


## Multi-Stakeholder Governance


### Strengthening International Cooperation


Participants agreed on strengthening multi-stakeholder governance models. Alsalmi advocated for re-energizing local Internet Governance Forum (IGF) forums and coalition building to prevent internet fragmentation. Ileleji proposed combining Global Digital Compact implementation with World Summit on the Information Society (WSIS) review to strengthen the IGF framework.


Gori emphasized involving municipalities as key players closer to citizens in digital rights advocacy. Alsalmi stressed the importance of civil society working at local levels and engaging judicial systems when governments don’t support digital rights initiatives.


### Research and Democracy Focus


Thora, a PhD researcher from Iceland with 20 years of experience building large IT systems, studies how very large online platforms and search engines (VLOPs and VLOSEs under the Digital Services Act) undermine democracy in the European Economic Area. She pressed the panel on whether academia is actually demanding platform data through the DSA, warning that without such access researchers are left theorizing about a black box and studying only its outputs. Ileleji, for his part, recalled Time Magazine’s 2006 “Person of the Year” recognition of internet users to underline each user’s responsibility for the accuracy of the information they share.


## Key Outcomes and Ongoing Challenges


The session revealed both consensus and significant disagreements among participants. While there was agreement on the importance of multi-stakeholder governance, opposition to internet shutdowns, and public service media’s crisis response role, fundamental disagreements emerged on content regulation approaches.


Implementation challenges persist, including delayed DSA data access provisions, capacity gaps between large and small organizations, and the ongoing technical arms race between censorship and circumvention technologies. The discussion highlighted how different regional contexts require adapted strategies rather than universal solutions.


## Conclusion


This IGF session demonstrated the complexity of achieving meaningful digital access amid increasing digital authoritarianism. The combination of high-tech circumvention strategies from international broadcasters, community-level media strengthening in developing countries, and evolving regulatory frameworks in Europe suggests that meaningful digital access requires diverse, coordinated approaches.


The path forward involves both immediate practical actions—such as implementing existing data access provisions and strengthening community media partnerships—and longer-term framework development to update international human rights law for the digital age. Success will depend on navigating tensions between competing values while maintaining collaborative approaches to digital governance challenges.


Session transcript

Mr. Patrick Leusch: A very warm welcome everyone here in the room, in workshop room two in Lillestrøm in Oslo at the IGF, and remote, wherever you sit around the globe. My name is Patrick Leusch, I’m head of European affairs at Deutsche Welle, which is Germany’s international broadcaster, and I’m very happy to moderate this session here that has been co-organized by the BBC and Deutsche Welle. So, public service media and meaningful digital access: this workshop will deal with the lessons learned so far from policies followed by public service media to reach audiences via the internet and the challenges they face particularly in reaching global audiences. You might understand that, at least for Europe, for public broadcasting internet censorship is a growing issue but not such an important issue so far, but potentially it is when you look at some countries that start really limiting the access to information, to put it that way. But on a global scale there is a growing digital authoritarianism, and this poses a challenge for information providers, for media makers, for human rights communities; we’re talking about safe space communication, but we’re talking about journalism also brought to audiences to a less and less free extent. So what is the link to the concept of meaningful connectivity? That will be explored in a minute, and then we will step through different aspects of this challenge we are facing. We will explain a little bit practically how international broadcasters are handling this problem, and on the other hand, in the second part, let’s say, what’s important is to understand the policy implications and the regulatory implications and where this meaningful digital access needs to be strengthened from a policy and a regulatory and a legal framework, and there is room to improve by far, obviously, and that is what we will discuss with the following protagonists and you, because we consider you as an expert community, be you online or be you in the room, so you will have space to discuss among yourselves and with us obviously. So the panelists: first of all, as I mentioned, Abdallah Alsalmi, he’s policy advisor at the BBC and he was supposed to sit next to me because he’s the real organizer of that session, but his flight was first delayed and then cancelled, so no chance to come over from London. Hello Abdallah in London, very warm welcome. So I’m turning to the second speaker on screen, Paula Gori, who is the secretary general and the coordinator of the European Digital Media Observatory, EDMO. Hello Paula, thank you very much for joining. Thank you. And last but not least, with me is Poncelet Ileleji, he is the CEO of Joko Labs in Banjul, Gambia, and he’s an outstanding ICT expert. When you look at his LinkedIn track record, he has been a member of a lot of boards and expert groups that deal particularly with ICT in development. Thank you very much for making your way here, Poncelet. And last but not least, we have Oliver Ings, he is distribution manager at the BBC and he is the online moderator. I hope he’s there, and we will get the questions from the audiences via Oliver. Now let’s simply start. Abdallah, give us an overview: what are we talking about when we talk about meaningful connectivity or meaningful digital access? Are you going to share your screen with your presentation by yourself or do you want me to do that?


Mr. Abdallah Alsalmi: So I’m sharing my screen now and I wanted to ask if you could see it. Should come. So far we see you. So the IGF is telling me that they can’t see it. So I cannot see it on screen. Now we see it. Okay, perfect. All right, I’ll get started. Thank you, Patrick, for the introduction; I’m very happy to be here, and again apologies for not being able to be physically in workshop room two. I’m going to arrive later tonight, so hopefully we will meet some of you over the week. So to begin to talk about meaningful digital access, it’s really a good start to think of how technology and communications evangelists tended to lump all internet users in one group. So for example, someone who can send only text messages on WhatsApp over a 2G connection is put or placed in the same group with someone who has super fast broadband and can use Apple’s latest VR headset to play games. So over the last few years, some civil society groups such as the Alliance for Affordable Internet came up with this concept of meaningful digital access, and the idea behind it was that we need to go beyond just simple connectivity and beyond just having a device that is connected to the internet because it’s all about the experience, it’s all about what the internet users can make of the internet. And the meaningful digital access has a number of elements. The definition is not really set in stone, so it’s a bit flexible at the moment, but the first one is about reliability and affordability of the connectivity. Again, here we’re talking about the costs of data that vary largely between one country and another. Probably it’s getting cheaper, but still in some geographic contexts it’s a prohibitive aspect of using the internet. The second element is the appropriate devices, and the idea here is about how many devices a person has and whether they have a keyboard in their device or not when they are using the internet. So the more devices they have, and the better the specifications of the devices, the more their internet experience is going to improve. And number three, which touches on the issue of the digital divide and issues of development, which is very important, is digital literacy and skills, and again the UN and a large number of other organizations have been working on this, and it’s a huge subject. It also touches upon one of the points that some of our panelists will speak about today, which is disinformation. To what extent is the user able to enjoy their internet experience without being subjected to organized disinformation campaigns, either by governments, by companies or even by individuals. Fourth is the relevant content in local languages, and again here I think we have made huge strides in this, but again more work needs to be done in making online content available in languages where people find it easy to speak and to use the internet. And last but not least, number five is the safe and inclusive digital environments, and this is where we get into the area of cyber security, we get into the area of access and continuity of internet access. Is there an internet shutdown? Is there censorship and blocking? And all of these elements come in as requirements for a meaningful digital access. Now the UN has their own standard, which is called universal meaningful connectivity, and largely it’s very similar to meaningful digital access, but there are differences.
So a universal meaningful connectivity, or in short UMC, is more of a development goal that some UN organizations such as the ITU, the International Telecommunication Union, work on in cooperation with governments, in cooperation with civil society, and the idea is to upgrade the experience of online access based on specific metrics that have to do with how many people are connected to the internet and to what extent they are able to use data on a day-to-day basis, and what purposes they use the internet for: is it for business, is it for social networking, or for looking for a job. The aim of the universal meaningful connectivity is the same as the meaningful digital access in the sense that if people don’t have access to a good connection, they can’t look for a job, they can’t keep in touch with their family, they can’t express their opinions freely on the Internet. However, the only difference here is that the MDA is an outcome. It really relates to the experience and the quality of it, of using the Internet, while UMC is a goal in itself and a policy. And so for the UMC as a metric, there’s a lot of data that’s available already. If you go to the ITU data hub online, you will find really a good dashboard that shows you the scores of all the countries in the world which are members of the ITU, and where you can really see where more work needs to be done. For example, some of these scores are out of 100: if it’s 40 to 50, the UMC metric is limited; if it goes all the way up to 95 to 100, then it means that the target has been achieved. And yeah, so the last slide is about really this session, and we’re trying to look here at how public service media work to enhance and respond to these various challenges in attaining the status of meaningful digital access. So I’m going to stop here and get back to you, Patrick.


Mr. Patrick Leusch: Thank you very much, Abdallah, for that first introduction to set the scene a little bit. I think it’s very important to distinguish between an outcome and an objective. And I think we will come back to both of it, because we would like to look at it from a comprehensive point of view. You have mentioned different use cases which play into meaningful digital access, let’s say in the exchange between people looking for a job, for instance, informing themselves. From another perspective, it’s also from the sender side. You can say that there is one issue which is really a bigger issue, which is the access to information on a global scale, which is more and more limited. And I would like to jump in and show you now a little bit what that means to public service media like the BBC, us and others, and if Laura could launch my presentation, that would be great. So thank you very much. So as I said, we are an international broadcaster. I think you know roughly what we do. The map of press freedom that you see here is the guiding line for what we do. We provide unbiased information for free minds, and in a similar logic the BBC and the former USAGM, at least with some of their grantees, have been underway to provide independent information, reliable information, where there are limitations to that, particularly from local media or state media or whatever. So like the BBC and others, we provide this information in these local languages, made by teams from these countries. So we are not reporting about Germany to Gambia or Senegal. We are informing people in Russia about what’s happening in Russia and in Ukraine, right? And obviously, there is interest on the global scale. We reach 320 million users a week. And when you look at the geographical distribution, then you see also that most of them are reached on continents where there is a, let’s say, limited space of information, be it by technical means, be it by market means, or be it by digital order. We will have a closer look at Eastern Europe and Central Asia, because that is where the game is playing out at the moment when you look at censorship. Independent content that would otherwise be denied through censorship, disinformation and one-sided reporting, that’s what our final impact should be. Now, on one hand, we are all journalists. We are used to creating great journalistic content. But what are we telling taxpayers when they pay for this great content, when we cannot get it through to the audiences? So for many years, we and others have also invested in understanding censorship and teaching people to circumvent censorship, because otherwise they cannot access these contents. And by the way, this research on circumvention, for instance, is nothing we do exclusively for our own company. We share with the BBC and others, for instance, and that relates also to a lot of exile media, or media that are outside of the country they report for. You can mention Meduza, for Russia, and others, for instance. So that’s not only a matter for public service media like us, it’s a matter for a lot of free and independent media. Deutsche Welle is blocked in China, in Iran, in Egypt, in Belarus, in Russia, in Turkey to some extent. And since 2012, we started looking into circumvention technologies. And as for Iran, we are very successful, together with partners, obviously. And you understand that for a couple of days now, there’s been a complete shutdown in Iran, and there are expert groups working on this issue.
And Iran is a good example where over a long period of time, you really can build a skilled community that is able to access these contents via a range of tools, while the censorship is very efficient. Iranian censorship is quite efficient, let’s say. Maybe not as efficient as the Chinese one, but that’s a matter of how the Internet has been constructed. The Chinese Internet has been built as an inner Internet from the beginning, and the Iranian Internet was an open one, let’s say, but in a developing phase with limited connections to the outside global Internet at a certain point of time, where it was easier to cut it or control it. When you look at the Russian Internet, for instance, that is a different story, because this was a fully-fledged, globally interconnected Internet, which is now censored step by step, on a testing basis also, because the Russians cannot know exactly what else still works when they cut something on the other hand, and they want to avoid that. So it’s a kind of testing, but they are moving forward step by step, and even speeding up the process to disconnect everything. So in Iran, we touch millions of people on a weekly basis, and it’s not only digital natives that can use these technologies needed to access these contents. So Internet censorship, that’s something you have to understand, is a complex issue. It’s technically and politically a complex issue. It relies on a variety of technologies, policies, means and methods, and it’s a permanent cat and mouse to understand what precisely is the technology used to censor content, filter content, block content, or throttle content, or whatever, and then the mitigation measure is also adapted to this variety of methods. These methods vary from country to country, and they vary from, let’s say, censorship policy to censorship policy. You cannot remove digital censorship from outside the technical center, so to say, and you cannot dig holes in the censorship wall. You can try to get around the wall. That’s something that’s very important to understand. So we are not counter-hacking or counter-attacking, but we try to provide tunnels, funnels, or whatever to make people access the Internet they are kept away from. The second condition is that people in these countries want access to that content. It’s their decision at the end of the day. We provide the content. We provide an explanation of how to access it, but it’s their decision at the end of the day. I can tell you that this poses ethical questions for those doing this kind of stuff on one hand, and also legal questions: how a public service media like Deutsche Welle or the BBC is accountable in front of a financial court, and, according to the law that defines the mandate of a public broadcaster like us, what is the legal basis on which we can provide explainers to audiences that explain how to access VPNs or apps that have been co-developed with our IT specialists, that give them access to that content. What is the legal basis for that? We did research on that, and we asked the German Bundestag’s Scientific Service to give an answer, and the answer was: it is Article 19, the access to information, Article 19 of the UN Universal Declaration of Human Rights. That is the legal basis, and all the countries we are talking about that are censoring contents from us and others have signed this declaration, and international law overrides national law. So from a legal perspective, this is safe play. Simply on this article. So let’s start a discussion. What else?
Just to speed up a little bit: we provide internet freedom via our apps, via Psiphon and Tor. The session before here was from our friends at Tor. We work with them, obviously. They also host our contents on their Tor servers. When you access Tor, you see our content, for instance, which is very important. We work with proxies, mirror servers and a lot of other means to get people access to our content. So we are quite skilled in reading what needs to be done to get people access to these contents, because we have been doing that work for 12 or 13 years. Nevertheless, it is always a kind of challenge because it is costly. You need server space for mirror servers, for mirroring content for instance, and you pay Amazon or whoever for the server space. That’s really costly. But that circumvention works. You can see it from the access figures. This is a chart from the protests in Iran two years ago, and you see clearly where the peaks are. That’s clearly every time there was a shutdown, when there were protests, when there were limitations on the internet, people start seeking information. You can see also this chart from Russia. You see that there is a peak around a weekend in June. What happened on that weekend in June? It was the coup by Prigozhin. Then there is crisis in the countries. It was the same when you look at the day that Navalny died. You see the same peak. When there is crisis, people in these countries start seeking, let’s say, alternative information, and that’s why public service media and exile media are so important for these audiences. Okay, thank you very much. By the way, this is a small tool we co-developed with an Italian organisation, UNI. It’s called News Media Scan, and if you install it, it shows you which websites in a given country are currently blocked, effectively blocked, and which ones are freely accessible. A nice monitoring tool that gives you a glimpse of what’s going on in your country. So this is to give you an overview of what we do and how that relates very practically to the concept of meaningful digital access. Great. I would like to hand over to Paula now to give us a glimpse on policy aspects. Hi, Paula. Yes, we can hear you. Go ahead, please.


Ms. Paula Gori: Thank you very much for having me, and thank you for your great presentation. I noticed some keywords which I will try to take up also in my presentation. So for those who do not know, EDMO stands for European Digital Media Observatory, and we deal with disinformation; we are one of the pillars of the EU, actually, to tackle disinformation. But really in a nutshell, you can see us as a multi-stakeholder and multidisciplinary platform that tries to understand disinformation. Now Paula is frozen. Can you hear me? Can you see me? Yes, you are back. Okay, very good. Just very quickly, I wanted to reflect a little on the link between UMC and disinformation, because first, a step back: I mean, Abdallah presented it very well, and I was thinking of a metaphor, and it’s like when you build connectivity, it’s like when you build the infrastructure, which means think of, for example, the highway. Now we are all happy that we have a highway, but without rules it wouldn’t be so useful, because actually there would be a risk of having accidents, or, I don’t know, people walking on the highway and then actually having deadly accidents and so on. So there are a few rules. We are all free on the highway, but there are still a few rules, and this is somehow the same that happens with connectivity and content, in the sense that there is an infrastructure, but we need, not so many, but at least a few rules and at least principles which are globally shared, otherwise it’s hard to manage. And so this is somehow, if you want, my starting point. Now when it comes to disinformation, and we discussed that in a prior session, I mean, the whole issue is quite complex, and also the solution is complex; it’s a multi-solution, if you want, with full respect of fundamental rights and freedom of expression. But now linking it to public service media, which is, if you want, the core here in this session, I think there are a few reflections to be made. The first is that public service media is often seen as one of the solutions to tackle disinformation, and I think this is rightly so, and indeed, at least in the EU, the policy of the EU is to invest a lot and to support independent quality journalism and to support infrastructure for this journalism to actually be accessible. We also have to be honest: on some occasions, unfortunately, public service media are also sharing disinformation. We shouldn’t be blind to that. There were a few occasions in some countries in which this happened or is happening, but I think that once we are honest about that, I think we can clearly invest also on the solution side, where public service media play a key role, and as you rightly said, I think that crisis situations are those moments in which we really have the evidence that they are playing a key role. I think we saw it all during the COVID crisis: we were accessing public service media more than before because we were all looking for information, we were all lost. We were also accessing quite a lot of disinformation and online content, but public service media were in the end the media that everybody was relying on actually to get safe information. Now for public service media to work, I think, or to be reliable, I think what is very important is transparency.
You may be familiar with what we have in the EU, which is the European Media Freedom Act, among others, and according to the European Media Freedom Act there should be transparency on the ownership, on the structure, on the funding of the PSM. Why am I saying that? Because, as you rightly said, the choice is on the users, is on the citizens, and so while public service media are not imposing themselves, they are just being there as an alternative or as one of the alternatives, it is important for the citizens to be sure about who is behind them, how they are funded, how they are working, because I think this is an element that gives reliability and that helps the users trust. Then of course, as we were saying, citizens can access any information they want and any source they want, and this stays. I just wanted to maybe also close, because I know that we are a little late in this session, but I wanted to say something which is, I think, very important. I think there was an unfortunate, how can I say, coincidence between, if you want, an issue in the business model of traditional media, including the PSM, and in parallel, as we all know, a shift of advertisers from offline to online, but also, if you want, in the way that public service media produced and shared the news. I talked to many journalists of PSM, and there is a sort of mea culpa, in the sense that it is important to have a more innovative but also positive approach to selling news, because otherwise there is a risk that the users and the citizens actually are not interested in quality news, which is honestly a pity. So this is something where I see, for example, the BBC and Deutsche Welle are actually quite good examples, very positive examples, because they invest a lot in, if you want, new ways of producing content and of sharing content, and also try to be less sensational in their content and in their headlines and so on. But somehow it is important that PSM play a role also in not only being trustworthy because of the structure but also in being attractive because of their content. And I will close it here. And I just wanted to thank you really for the work that you are doing, because it needs some courage to do what you are doing. And this is really in the interest of citizens.


Mr. Patrick Leusch: Thank you very much, Paula. Very interesting points. I think we will come back to one or two, particularly the building blocks you mentioned on this trustworthiness, which is extremely important when you look at content shared via the variety of distribution forms you don’t own. I’m saying the platforms, for instance, and particularly the user habits, which play an incredible role in all that. Let’s come back to that later. But first of all, I made you wait here on the screen. Poncelet, go ahead. What do you think?


Mr. Poncelet Ileleji: I think personally, good morning, everybody. When we look at public service media and meaningful digital access, I like to look at it from a grassroots level. What information do community radios are able to provide to their citizens? In most of the cases, you look at sub-Saharan Africa, for example, my beloved continent, where only about 37% of the population have broadband connectivity. In most cases, if I take the Gambia, my home, you have people who live in rural areas who have an Internet connection, but they don’t have meaningful connectivity. Because sometimes, most of the big telcos, what do they do? They put their towers in most of the big cities, municipal cities. So people in the villages and in rural areas in most parts of Africa get their information from all these community radio stations. Some of these community radio stations also link up with the BBC or Deutsche Welle to produce information. So the most important thing is that with public media access, we have to strengthen our community radio stations. We have to give them more digital literacy tools and link them up to community network centers, whereby they can download relevant information from big media houses to disseminate to their population. I’m looking at it from a grassroots perspective. We have to know: what does the average common man want in a rural area? He wants to get information on education, health, agriculture. That is the basic information he needs to live his life and contribute to the well-being. Now, in terms of what Paula talked about, when you look at disinformation: yes, if you don’t equip community-based public media with the right tools to be able to provide good news and updated news that is not disinformation, what are a lot of people now doing? They get their information from fake news that spreads mainly through messaging apps like WhatsApp. So someone just sends a message and it goes viral, and it’s fake. But who debunks all this information? It is the community radio saying: oh, that is not true, that X and Y activist has been arrested, blah, blah, blah. It’s not true. This happened, and everything. So the key for meaningful connectivity on information is strengthening our community-based radio stations, and that can be made possible through what I would call big public media like the BBC, like Deutsche Welle, who work around different parts of the world. So they have to have partnerships with local community radio stations, give them digital literacy tools for this to be achieved. I’ll lastly say that if you look at the Global Digital Compact implementation, one of the key things there is the digital divide. We still have 2.6 billion people in the world that are not connected. And if you want people to be connected, once you equip them with the right information through public media, indirectly they’ll be connected.


Mr. Patrick Leusch: Thank you. Thank you very much, Poncelet. That is a very important point. Just to put a question to understand correctly: what you’re referring to is, let’s say, the technical infrastructure first, getting more people faster technical access to information via a policy to provide, let’s say, high-speed Internet in rural areas, particularly in Africa. Do I understand correctly that you are pleading for a sped-up infrastructural approach also to provide this, right?


Mr. Poncelet Ileleji: Yes, in a way, yes. But we have to live with the reality on the ground. The reality on the ground is that we are still a long way from achieving meaningful connectivity on broadband in most parts of Africa. That is the reality. But how do you do it? By developing the capacities of community radio stations so that they have that capacity. By linking them up with a community network center, they have Internet connectivity to get good information to disseminate to that community. So when you do that, the information that the average person in a rural setting might not be able to get, because he doesn’t have meaningful connectivity, he will be able to get through his local radio station, because they are equipped. So that’s why the meaningful access I’m talking about, I’m linking it up with public media. And the public media I’m referencing is what is at the community level, and that’s the community-based radio stations who, again, for world news, for other things, can link up to the BBC, to Deutsche Welle. You have all these learning platforms. They can link up to these learning platforms to provide other services in education, health, agriculture that people need to have.


Mr. Patrick Leusch: Exactly. That’s absolutely right. And that’s what’s happening, by the way, because we, for instance, work with partners. We can pick content and distribute that content on our own platforms. We co-develop content. And, by the way, like the BBC with Media Action, we provide trainings. And that training relates also to shifting local media into online reporting and everything that comes with it. So let me turn to you guys here in the room. And let me also ask our online moderator, Oliver Ings, if there is a question that has been put forward so far from those who are connected online.


MODERATOR: Good day to everybody. Thank you. I can’t see any questions in the Zoom chat at the moment, but I do see that Paula has her hand up. So perhaps we should give the floor to her.


Mr. Patrick Leusch: Go ahead, Paula. Only on the condition that there are no more questions, of course, from the audience.

Ms. Paula Gori: I just wanted to kind of go back to what Ponce was saying, because I think it’s very important. And I think there are another two elements. One is: we don’t know. I mean, we are not fact-checking, or we don’t know what is going through private messaging apps, which is absolutely correct. I will add another layer, which is we even don’t know the answers that Gen AI is giving to people. So whenever you ask an AI chatbot about something, it is giving you an answer and no one knows it. It is like between you and the chatbot, which is creating an additional element, probably even more scary. And the second element is, and I wanted, I mean, but you, Ponce, are more familiar with the African continent, but I remember some years ago, Meta and Google were sending balloons and drones to provide connectivity in some African regions. And among the conditions was the fact that you could only access a limited number of websites, including, of course, Facebook. Then if this is the case, and then if there is lots of disinformation, but also hate speech and so on, on those platforms, then somehow the users, the citizens, are locked in, because they have the connectivity, but it is by no means meaningful nor safe, because you are accessing content which is disinformation or, even worse, illegal speech. So I think this is quite important. And just very quickly, again, on the messaging: I think it is very linked to the urban and rural areas and also to the fact that, as human beings, we trust our families, we trust our friends. So it is somehow replicating the word-of-mouth situation that we also had in the past, but, as you rightly say, in a way more scary way, because there is still this convincing element that if it comes from online, it is trustworthy.

Mr. Patrick Leusch: Thank you very much. We have a question in the room here. Sir, go ahead.


Giacomo Mazzone: Yeah, it’s working. Giacomo Mazzone from Eurovision. I have a question in general to all the speakers that is about the… It seems to us that… The fact-checking is not enough. We need to go towards a more comprehensive approach, more holistic approach. That means having regulation that will help in the negotiation with the platforms in order to be more effective in the work that you do. I know that recently there has been a pledge launched by EBU and the association of other newspapers to the platforms. Can you tell more about that, if you are aware?


Mr. Poncelet Ileleji: If I were to comment, I would first want to respond to what you mentioned about regulation. We shouldn’t have regulation on content. It goes against freedom of speech. So immediately you start trying to regulate content, then you are infringing on the rights of people. Yes, there is a moral issue on what kind of content you produce, and we have to be able to fact-check information, and that is why a lot of countries, a lot of organizations are just fact-checking information. And the last thing I will say: it’s also a moral duty for people. Where you see most messages, whether it’s on TikTok or on a WhatsApp messaging app, you get a piece of information and you just post it, send it to other people, without even fact-checking it, and you are supposed to be the educated one. In most cases, most of the people that carry all this hate speech or disinformation are the educated folks, and so we have to do a lot of work, because so-called educated folks are now using all these platforms to misguide the majority of the populace, and we have to work hard in changing that. But to bring about regulation of content is a no-go for me. Thank you.


Mr. Patrick Leusch: Okay, very strong commitment. Thank you very much. Paula, I have this question from Giacomo in mind that relates to the role of the platforms. I can say from my experience, for public broadcasters, and I think you know that very well, Giacomo, because that’s true for public broadcasters as well, or for most of the media, but a little bit different for commercial media than for the public media, I guess, is how to play the platforms. I mean, you can’t read the platforms. We don’t know what the platforms are doing with our content. You don’t know what is in the black box. We have expert teams sitting that check what goes from our newsrooms into the black box, and they can check what comes out of the black boxes in terms of audience, and then they guess what the algorithm is doing with your content and why, and then they advise the newsrooms to adapt the content according to their guessing, without knowing really what the platform is doing with your content. And obviously, many journalists and content producers are tilting between “we have to get rid of the platforms” and “how can we play them best”, and that is very difficult to play. But from your perspective, because EDMO is really at the heart of assessing also what that means, what is your assessment on that from a regulatory point of view? And I know that we are tilting back a little bit to the European context; we will widen up again to the global context in a minute. Paula.


Ms. Paula Gori: frozen, but I hope I got the question. So first I wanted just to reinforce what was said. We all get always freezing when we talk about the platforms, you know, that’s normal. No, I just wanted to say that I first wanted to reinforce what was said. Regulations should not be on the content, and this is very important, and this is also, for example, the approach of the Digital Services Act. It’s not on the content, it is on the risks that the way the platform work actually can pose. So this is very important. Now, on what you were saying on content that is on platforms, I think this is the overall point that we are making since years, that there is no transparency in the algorithmic decisions. So we are really not fully aware why we are seeing a given piece of news rather than the other. And let me also say that I’m probably even the platform still don’t know, because as far as I know, it’s an algorithm that works on an algorithm that works on another algorithm. So they somehow tweak it so much that, honestly, I fear that in some occasions they even lost control on these whole tweaks on the algorithms. But clearly what we know is that emotions fuel negative content, and especially negative emotions. So whenever a content is emotionally strong, it is based on fear, on division, on threat, and so on, it becomes more viral. And this is a way to tweak the algorithm. And this, in my opinion, is why unfortunately some media then move to sensationalism content, because it actually moves the algorithm more than a plain information that is without emotions and is not emotionally emotional. But going back to the regulation that we are seeing, I think that we are going, and I was saying that in the previous session, with the global principles that we are seeing, with the Global Digital Compact, with the UNESCO guidelines, and so on, I think we are agreeing on basic principles. And in the EU especially, as I was saying earlier, what we are doing is we are looking at if the way the platform works can actually be mis-abused. Let’s put it in a very simple way. So it’s not about the content. It’s just the way you are working can be mis-abused by malign actors. And this creates risks to public security, public health, civic discourse, and so on. And that’s where the regulation is, because as it was said, we could not get into the content. Thank you very much, Paula, so far.


Mr. Patrick Leusch: Thank you very much, Paula, so far. So there’s a lady who has been standing at the microphone for minutes. I didn’t want to interrupt. Good answers. Very kind of you. Thank you. The floor is yours.

Audience: My name is Thora. I’m a PhD researcher coming out of Iceland. I’m studying how VLOPs and VLOSEs, very large platforms and search engines, are undermining democracy in the EEA. And my problem is, of course, scarcity of data and the black box. Now, I have 20 years of experience working in IT and building large systems. So in my mind, I see it. I see the black box. But as an academic, now the DSA is supposed to give me access, but it is not. The platforms are dragging their feet. And I, again, am asking here, and sort of lobbying on behalf of academia: are you guys doing any academic work and demanding data through the DSA? If not, what is hindering it? And what can we do to fix this problem, so we aren’t always theorizing with the law? Then comes the black box and we are studying the outcome, because that’s, of course, a futile thing. Thank you.


Mr. Patrick Leusch: Thora, thank you very much. That’s really the right question at this moment, because I was just about to try to link the different aspects that we have had now, right? So the access to this data is one regulatory thing, and the DSA is at the heart of it. And we understand that the EU is slow in, you know, pushing that forward. Maybe there is an overarching political item in it, tax or something like that, I don’t know exactly. But the question is the following, and this question goes to Poncelet and goes also to Paula, but also to you guys here in the room and online. So we have touched on the censorship issue at the beginning, which is a part of meaningful digital access, right? Then Poncelet has spoken about the challenges that you see in Africa, for instance, which lie partly on another level. I’m not saying there’s no censorship, there is, but meaningful access means much more and leads to skills of media, but also to technical development. The access to data on the large platforms and the regulatory questions that come with it is another aspect. So I have at least three different elements which are not easy to link together when we ask what needs to be done policy-wise or on a regulatory basis to address these challenges. So what do you think, where to attack? You mentioned, Paula, the EMFA, for instance, and the DSA has been mentioned. So what I’m saying is, there are things in place; why aren’t they working, and what needs to be done to make them work better, right? The UNESCO initiative, the Global Digital Compact, all this has been mentioned. So tell us what to work on to make all these elements that are in place perform better.


Mr. Abdallah Alsalmi: I would like to look at the issue of the international human rights aspect of access. Article 19 is really, I would say, is really outdated and we need to have another look at it, update it, and renew commitments to it. The other issue is the multi-stakeholder model; since we are at the IGF, it’s good to mention it. For a number of years, we have been hearing a lot about support for the multi-stakeholder model of governing the internet. Oftentimes it comes as a response by some civil society groups and some governments to the efforts by particular governments to try and reshape the internet as we know it. We really need to energize efforts working towards a real multi-stakeholder model. My idea is that we can start with the local branch of your IGF by trying to build coalitions, talk to your government, and try to push for an internet that is really open and in a way regulated to protect its current openness and the fact that it has no borders. We really cannot continue to work with this legal loophole. I’m going to make this comparison now. If you look at shortwave radio and satellite TV, they are protected by rules of the International Telecommunication Union, so governments cannot jam them. Governments cannot disrupt these broadcasting technologies. But the internet now is not protected. Any government can shut down the internet at any time without due recourse to legal background or text. Any government can block websites, again, at will, with no questions and no justification provided. So, my call to action is about re-energizing the local IGF forums and starting from that point.


Mr. Patrick Leusch: Thank you very much. If I may just follow up on data access. A tepid statement, particularly saying Article 19 is outdated. So, over to Paula.


Ms. Paula Gori: On data access, quickly, though I would love to have more time. So, first of all, you are completely right. The current policy framework actually establishes the obligation that the platforms give access to researchers, to independent researchers, both access to public and to private data. What is still missing is the Delegated Act. So, the Commission should publish the Delegated Act, which makes this operational, and then there should be no excuses. So, we really hope that this will be active. What EDMO did already years ago: we did a legal analysis on whether having access to private data would infringe the GDPR. And you can find on our website a good report saying that actually it is fine to get access to those data. We are also covered on that side, and we actually even worked on an independent intermediary body that could work, kind of with the digital services coordinators, in between researchers and platforms. So, I fully agree with you. Just one thing I wanted to say is that once we get access to those data, there will be two main issues. The first is: will all organizations be equipped financially and infrastructurally to deal with all this data? Because there is a risk of two-speed academic institutions and civil society organizations. The big and rich ones will make it; the smaller ones not, which is an issue also if you look at some specific countries, not only in Europe, but now the DSA is for Europe, so think of, for example, the Eastern countries and so on. But also countries like Italy: I’m not sure if many universities would be able to do that. The second: policymakers shall be ready, because if we really access those data, we will understand so many things about disinformation, and also about its impact, that probably we will have to change the whole policy framework once we have the knowledge of what is really happening online, because it’s only through those data that we will have that knowledge. So, just those two points to quickly close.

Mr. Patrick Leusch: Thank you very much, Paula.


Mr. Poncelet Ileleji: Oh, yeah. If I look at it, I totally agree with what Paula said and Abdallah. I will say the multistakeholder process is the key to all we are discussing here; especially when you look at data governance, as the PhD researcher said, data governance has now become a key component in all the work we do. But this multistakeholder process involves us being able to dialogue with our governments, with civil society, with academia, with legal people, with technical people, so we have to sit with equity and hear each other out and agree to disagree. If we don’t do that, starting at a grassroots level, we will continue to be dividing ourselves, and instead of building an Internet that is not fragmented, we will continue to have fragmentation at various levels, and that is why disinformation is now such a big thing. If you go back to 2006, Time’s Person of the Year in 2006 was You. That You, meaning us, when Time Magazine made us the Person of the Year, still applies today, and with the information we give out, we have to know that it is correct and it is impactful to our society, and that is the key of this session we have had today. And it also links up with meaningful connectivity. We should never forget that the majority of the people whose lives we want to impact don’t have connectivity. Look at it: 2.6 billion people, according to the statistics from the ITU. So let us go back to the basics and try to use our public media, especially those at the grassroots level, and equip them well to build the world we want, so that we get better information for socio-economic development. Thank you.


Mr. Patrick Leusch: Thank you very much for that strong pledge, Poncelet. Last question from my side, two minutes left, a very short one. What is the biggest block we have to move out of the way to go down that path, to strengthen the multi-stakeholder approach, to look deeper into the challenges that are in the digital divide area, and to come up with a better version of, to put it very simply, a better version of Article 19. What is the biggest block we have to move out of the way to go down that path?


Mr. Poncelet Ileleji: If I start, I have one simple equation. The Global Digital Compact implementation plus WSIS recrafted. We are coming to 20 years of the WSIS, the World Summit on the Information Society; it is the WSIS that led to the IGF. If we have this GDC implementation plus the WSIS recrafted, in my mind, in July, it equates to a stronger and strengthened IGF that will improve lives.


Mr. Patrick Leusch: Thank you very much. Very precise. Abdallah?


Mr. Abdallah Alsalmi: I mean, I agree with Poncelet here about the importance of the huge pledges made within the Global Digital Compact, as well as the upcoming review of WSIS. My main concern here is that we rely too much on governments, and as we can see in democracies, you might end up with a government that doesn’t like what you’re doing, and as such they either oppose what you’re doing, or they don’t help you. I look at the example of the recent ruling by the Supreme Court in India, which made a landmark verdict regarding access to digital life as part of the individual’s own right to life. I think civil society could start really by working at the local level with the governments, and if the government doesn’t lend an ear to them, then they can always work towards the judiciary sector and find some supporters in other aspects of their societies.


Mr. Patrick Leusch: Thank you very much, Abdallah. Nine seconds for you, Paula.


Ms. Paula Gori: Okay, so I fully endorse what was said. I would just say: change the narrative, change the way we put this whole conversation, make it attractive also for those who don’t believe in democracy, and on the other side, involve municipalities. I think they could play a key role here. They are way closer to citizens, and they could be very active in this field.


Mr. Patrick Leusch: Thank you. Thank you very much. I would like to thank my panellists, Poncelet here on stage, Paula and Abdallah remotely. Thank you very much for your insights, for that great discussion. Thank you all, the audience here, for your questions and comments, and for participating online to this session on meaningful digital access and what role for PSM public service media in it. And thanks to Flora and her team for this great framing here, organising the technical means for this session. Thank you very much. Thank you. Thank you.



Mr. Abdallah Alsalmi

Speech speed

145 words per minute

Speech length

1392 words

Speech time

573 seconds

Meaningful digital access goes beyond simple connectivity to include reliability, affordability, appropriate devices, digital literacy, relevant content in local languages, and safe digital environments

Explanation

Alsalmi argues that meaningful digital access is a comprehensive concept that encompasses multiple elements beyond just having an internet connection. He emphasizes that it’s about the quality of the internet experience and what users can actually accomplish online, not just technical connectivity.


Evidence

Example comparing someone who can only send text messages on WhatsApp over 2G connection versus someone with super fast broadband using Apple’s latest VR headset. References Alliance for Affordable Internet’s work on this concept.


Major discussion point

Meaningful Digital Access and Connectivity Concepts


Topics

Development | Human rights | Infrastructure


UN’s Universal Meaningful Connectivity (UMC) is a development goal with specific metrics, while meaningful digital access is an outcome focused on user experience quality

Explanation

Alsalmi distinguishes between UMC as a policy goal that organizations like ITU work toward with governments and civil society, versus meaningful digital access which represents the actual outcome and experience quality. Both aim to enable people to use internet for jobs, family communication, and free expression.


Evidence

References ITU data hub dashboard showing country scores from 40-50 (limited) to 95-200 (target achieved). Mentions specific metrics about daily data usage and internet purposes (business, social networking, job searching).


Major discussion point

Meaningful Digital Access and Connectivity Concepts


Topics

Development | Legal and regulatory | Infrastructure


Article 19 of international human rights law is outdated and needs renewal, with stronger commitments to protect internet openness and close current legal loopholes

Explanation

Alsalmi argues that current international human rights frameworks are insufficient to protect internet access and that governments can shut down or block internet content without legal recourse. He calls for updated international commitments and legal protections similar to those that exist for shortwave radio and satellite TV.


Evidence

Comparison with shortwave radio and satellite TV, which are protected by International Telecommunication Union (ITU) rules preventing government jamming, while the internet has no such protections, allowing governments to shut down the internet or block websites at will.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Human rights | Legal and regulatory | Infrastructure


Multi-stakeholder governance model needs energizing through local IGF forums and coalition building to prevent internet fragmentation

Explanation

Alsalmi advocates for strengthening the multi-stakeholder model of internet governance by starting at local levels through IGF forums. He sees this as a response to efforts by some governments to reshape the internet and as a way to maintain an open, borderless internet.


Evidence

References the IGF context and mentions building coalitions to talk to governments and push for open internet regulation that protects current openness.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Legal and regulatory | Human rights | Infrastructure


Agreed with

– Mr. Poncelet Ileleji

Agreed on

Multi-stakeholder governance model is essential for internet governance and addressing digital challenges


Civil society should work at local levels and engage judiciary systems when governments don’t support digital rights initiatives

Explanation

Alsalmi suggests that when governments don’t support digital rights efforts, civil society should turn to judicial systems for support. He emphasizes the importance of not relying too heavily on governments since they may change and oppose digital rights work.


Evidence

Cites recent Supreme Court ruling in India that recognized access to digital life as part of individual’s right to life as a landmark example of judicial support for digital rights.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Human rights | Legal and regulatory | Development



Mr. Poncelet Ileleji

Speech speed

150 words per minute

Speech length

1281 words

Speech time

509 seconds

Community radio stations need strengthening with digital literacy tools and partnerships with international broadcasters to serve rural populations effectively

Explanation

Ileleji argues that in sub-Saharan Africa where only 37% have broadband connectivity, community radio stations are crucial information sources for rural populations. He advocates for strengthening these stations through digital literacy training and partnerships with major international broadcasters like BBC and Deutsche Welle.


Evidence

Statistics showing 37% broadband connectivity in sub-Saharan Africa. Example from Gambia where rural areas get information from community radio stations that link with BBC or Deutsche Welle. Mentions people need information on education, health, and agriculture.


Major discussion point

Meaningful Digital Access and Connectivity Concepts


Topics

Development | Sociocultural | Infrastructure


Infrastructure development must be realistic – focus on equipping community-based media with internet connectivity to disseminate information to those without meaningful broadband access

Explanation

Ileleji acknowledges that achieving meaningful broadband connectivity across Africa will take time, so proposes a practical interim solution. He suggests equipping community radio stations with internet connectivity and linking them to community network centers so they can access and disseminate quality information to their communities.


Evidence

References the reality that meaningful broadband connectivity is still far away in most parts of Africa. Mentions linking community radio stations to community network centers and learning platforms.


Major discussion point

Meaningful Digital Access and Connectivity Concepts


Topics

Development | Infrastructure | Sociocultural


Community radio stations play a crucial role in debunking fake news that spreads through messaging apps like WhatsApp in rural areas

Explanation

Ileleji explains that without proper information sources, fake news spreads rapidly through messaging apps in rural communities. Community radio stations serve as trusted local sources that can fact-check and debunk misinformation, providing accurate information about local events and issues.


Evidence

Example of fake news spreading through WhatsApp about activist arrests, with community radio stations providing corrections and accurate information about what actually happened.


Major discussion point

Disinformation and Public Service Media Role


Topics

Sociocultural | Human rights | Development


Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users

Explanation

Ileleji strongly opposes content regulation, arguing it violates freedom of speech rights. Instead, he advocates for fact-checking mechanisms and emphasizes the moral responsibility of educated users who often spread disinformation through social platforms without verification.


Evidence

Points out that educated people are often the ones spreading hate speech and disinformation on platforms like TikTok and WhatsApp without fact-checking before sharing.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Ms. Paula Gori

Agreed on

Content regulation should be avoided in favor of other approaches to address disinformation


Disagreed with

– Giacomo Mazzone

Disagreed on

Content regulation approach


Global Digital Compact implementation combined with WSIS review could strengthen IGF and improve lives globally

Explanation

Ileleji proposes that implementing the Global Digital Compact alongside a recrafted World Summit on the Information Society (approaching its 20-year anniversary) would create a stronger IGF framework. He sees this combination as key to addressing digital divide challenges and improving global connectivity.


Evidence

References the upcoming 20-year anniversary of WSIS and notes that WSIS led to the creation of IGF. Mentions 2.6 billion people still lacking connectivity according to ITU statistics.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Legal and regulatory | Development | Infrastructure


Agreed with

– Mr. Abdallah Alsalmi

Agreed on

Multi-stakeholder governance model is essential for internet governance and addressing digital challenges



Ms. Paula Gori

Speech speed

174 words per minute

Speech length

1908 words

Speech time

657 seconds

Public service media serves as a solution to tackle disinformation, particularly during crisis situations like COVID-19 when people seek reliable information sources

Explanation

Gori argues that public service media plays a crucial role in combating disinformation, especially during crises when people desperately need trustworthy information. She notes that during COVID-19, people increasingly turned to public service media for reliable information despite also accessing disinformation online.


Evidence

COVID-19 pandemic example where people accessed public service media more than before because they were seeking reliable information during uncertainty, even while disinformation was also prevalent online.


Major discussion point

Disinformation and Public Service Media Role


Topics

Sociocultural | Human rights | Development


Agreed with

– Mr. Patrick Leusch

Agreed on

Public service media plays a crucial role during crisis situations


Transparency in ownership, structure, and funding of public service media is essential for building citizen trust and reliability

Explanation

Gori emphasizes that for public service media to be effective, citizens must understand who owns them, how they’re structured, and how they’re funded. This transparency is crucial for building trust and allowing citizens to make informed choices about their information sources, as mandated by frameworks like the European Media Freedom Act.


Evidence

References the European Media Freedom Act requirements for transparency in PSM ownership, structure, and funding. Emphasizes that choice remains with users/citizens who need this information to trust sources.


Major discussion point

Disinformation and Public Service Media Role


Topics

Legal and regulatory | Human rights | Sociocultural


Public service media must adopt more innovative and positive approaches to content production to remain attractive to audiences

Explanation

Gori acknowledges that traditional media, including public service media, face challenges in their business models and content approach. She argues that PSM must innovate in content production and sharing while avoiding sensationalism to maintain audience interest in quality journalism.


Evidence

Mentions conversations with PSM journalists who acknowledge a ‘mea culpa’ about needing more innovative approaches. Cites BBC and Deutsche Welle as positive examples investing in new content production and sharing methods while avoiding sensationalism.


Major discussion point

Disinformation and Public Service Media Role


Topics

Sociocultural | Economic | Development


Regulation should target risks posed by platform operations rather than content itself, as demonstrated by the EU’s Digital Services Act approach

Explanation

Gori advocates for regulation that focuses on the risks created by how platforms operate rather than regulating content directly. She explains that the Digital Services Act approach examines whether platform operations can be misused by malign actors to create risks to public security, health, and civic discourse.


Evidence

References the Digital Services Act as an example of risk-based regulation rather than content regulation. Explains focus on risks to public security, public health, and civic discourse from platform operational methods.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Legal and regulatory | Human rights | Cybersecurity


Agreed with

– Mr. Poncelet Ileleji

Agreed on

Content regulation should be avoided in favor of other approaches to address disinformation


Lack of algorithmic transparency prevents understanding of why certain content is promoted, with platforms potentially losing control over their own complex algorithmic systems

Explanation

Gori highlights the problem of algorithmic opacity, explaining that neither users nor possibly even platforms themselves fully understand how algorithmic decisions are made. She suggests that platforms may have lost control over their own systems due to excessive tweaking of algorithms built upon other algorithms.


Evidence

Explains that algorithms work on algorithms that work on other algorithms, with so much tweaking that platforms may have lost control. Notes that negative emotions fuel content virality, leading some media toward sensationalism.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Legal and regulatory | Sociocultural | Human rights


Municipalities should be involved as key players closer to citizens who can be active in digital rights advocacy

Explanation

Gori suggests that local municipalities should play a more active role in digital rights and meaningful connectivity issues because they are closer to citizens than national governments. She sees them as potentially more responsive and effective advocates for citizen needs in the digital space.


Major discussion point

Policy and Regulatory Framework Improvements


Topics

Legal and regulatory | Development | Human rights



Mr. Patrick Leusch

Speech speed

134 words per minute

Speech length

3652 words

Speech time

1625 seconds

International broadcasters invest in censorship circumvention technologies to reach audiences in countries with limited press freedom, sharing research with other independent media

Explanation

Leusch explains that public service media like Deutsche Welle and BBC invest significantly in understanding and circumventing censorship to reach audiences in countries with restricted information access. This research and technology is shared not only between major broadcasters but also with exile media and independent outlets.


Evidence

Deutsche Welle blocked in China, Iran, Egypt, Belarus, Russia, Turkey. Started circumvention work in 2012. Mentions collaboration with BBC and sharing with exile media like Meduza for Russia. Reaches millions weekly in Iran despite censorship.


Major discussion point

Internet Censorship and Circumvention Technologies


Topics

Human rights | Infrastructure | Cybersecurity


Internet censorship is technically and politically complex, requiring permanent adaptation of mitigation measures as censorship methods vary by country and policy

Explanation

Leusch describes internet censorship as a complex, evolving challenge that requires constant adaptation. He explains that censorship methods differ significantly between countries and policies, requiring a ‘cat and mouse’ approach to develop appropriate countermeasures for each specific situation.


Evidence

Compares Iranian censorship (efficient, but built on an originally open internet), Chinese censorship (an inner internet from the beginning), and Russian censorship (a step-by-step testing approach). Mentions the variety of technologies, policies, and methods used.


Major discussion point

Internet Censorship and Circumvention Technologies


Topics

Cybersecurity | Human rights | Infrastructure


Circumvention work is legally justified under Article 19 of the UN Human Rights Charter regarding access to information, which supersedes national censorship laws

Explanation

Leusch addresses the legal and ethical questions surrounding circumvention work by public broadcasters. He explains that German Bundestag Legal Service confirmed that Article 19 of the UN Human Rights Charter on access to information provides legal basis for this work, as international law supersedes national censorship laws.


Evidence

German Bundestag Legal Service research confirming Article 19 as legal basis. Notes that countries engaging in censorship have signed the UN Human Rights Charter, making international law applicable over national censorship laws.


Major discussion point

Internet Censorship and Circumvention Technologies


Topics

Human rights | Legal and regulatory | Infrastructure


Crisis situations drive increased demand for alternative information sources, as evidenced by usage spikes during protests and political upheavals

Explanation

Leusch demonstrates that during times of crisis, internet shutdowns, or major political events, people in censored countries actively seek alternative information sources. This pattern shows the critical importance of maintaining access to independent media during crucial moments.


Evidence

Charts showing usage spikes during Iran protests two years ago, Prigozhin coup attempt in Russia, and when Navalny died. Clear correlation between crisis events and increased circumvention tool usage.


Major discussion point

Internet Censorship and Circumvention Technologies


Topics

Human rights | Sociocultural | Infrastructure


Agreed with

– Ms. Paula Gori

Agreed on

Public service media plays a crucial role during crisis situations



Audience

Speech speed

156 words per minute

Speech length

181 words

Speech time

69 seconds

Academic researchers need better access to platform data through proper implementation of DSA provisions to study platform impacts on democracy

Explanation

An academic researcher (Thora) studying how large platforms undermine democracy in the EEA argues that the Digital Services Act should provide data access but platforms are not complying. She emphasizes that without this data, academic research remains limited to theorizing about inputs and studying outcomes without understanding the ‘black box’ operations.


Evidence

PhD research on VLOPs (very large online platforms) and VLOSEs (very large online search engines) undermining democracy, 20 years of IT experience, platforms dragging their feet on DSA data access requirements, current research limited to studying outcomes rather than processes.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Legal and regulatory | Human rights | Development



MODERATOR

Speech speed

170 words per minute

Speech length

37 words

Speech time

13 seconds




Giacomo Mazzone

Speech speed

110 words per minute

Speech length

99 words

Speech time

53 seconds

Fact-checking alone is insufficient and requires a more comprehensive, holistic approach including regulation to negotiate effectively with platforms

Explanation

Mazzone argues that current fact-checking efforts are not adequate to address disinformation and platform-related challenges. He advocates for a broader approach that includes regulatory frameworks to strengthen negotiations with platforms and make content verification efforts more effective.


Evidence

References a recent pledge launched by EBU and newspaper associations to platforms, suggesting coordinated industry efforts to address platform accountability.


Major discussion point

Platform Regulation and Algorithm Transparency


Topics

Legal and regulatory | Human rights | Sociocultural


Disagreed with

– Mr. Poncelet Ileleji

Disagreed on

Content regulation approach


Agreements

Agreement points

Multi-stakeholder governance model is essential for internet governance and addressing digital challenges

Speakers

– Mr. Abdallah Alsalmi
– Mr. Poncelet Ileleji

Arguments

Multi-stakeholder governance model needs energizing through local IGF forums and coalition building to prevent internet fragmentation


Global Digital Compact implementation combined with WSIS review could strengthen IGF and improve lives globally


Summary

Both speakers strongly advocate for strengthening multi-stakeholder approaches to internet governance, with Alsalmi emphasizing local IGF forums and coalition building, while Ileleji proposes combining Global Digital Compact implementation with WSIS review to strengthen the IGF framework.


Topics

Legal and regulatory | Human rights | Infrastructure


Content regulation should be avoided in favor of other approaches to address disinformation

Speakers

– Ms. Paula Gori
– Mr. Poncelet Ileleji

Arguments

Regulation should target risks posed by platform operations rather than content itself, as demonstrated by the EU’s Digital Services Act approach


Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users


Summary

Both speakers reject direct content regulation as a solution, with Gori advocating for risk-based regulation of platform operations and Ileleji emphasizing that content regulation violates freedom of speech principles.


Topics

Legal and regulatory | Human rights


Public service media plays a crucial role during crisis situations

Speakers

– Ms. Paula Gori
– Mr. Patrick Leusch

Arguments

Public service media serves as a solution to tackle disinformation, particularly during crisis situations like COVID-19 when people seek reliable information sources


Crisis situations drive increased demand for alternative information sources, as evidenced by usage spikes during protests and political upheavals


Summary

Both speakers recognize that public service media becomes particularly important during crises, with Gori noting increased reliance during COVID-19 and Leusch providing evidence of usage spikes during political upheavals and protests.


Topics

Human rights | Sociocultural | Infrastructure


Similar viewpoints

Both speakers reference Article 19 of international human rights law as fundamental to internet access rights, though Alsalmi argues it needs updating while Leusch uses it as current legal justification for circumvention work.

Speakers

– Mr. Abdallah Alsalmi
– Mr. Patrick Leusch

Arguments

Article 19 of international human rights law is outdated and needs renewal, with stronger commitments to protect internet openness unlike current legal loopholes


Circumvention work is legally justified under Article 19 of the UN Human Rights Charter regarding access to information, which supersedes national censorship laws


Topics

Human rights | Legal and regulatory


Both speakers recognize the challenge of misinformation spread through digital platforms and the need for trusted sources to counter it, though they focus on different solutions – Gori on algorithmic transparency and Ileleji on community radio fact-checking.

Speakers

– Ms. Paula Gori
– Mr. Poncelet Ileleji

Arguments

Lack of algorithmic transparency prevents understanding of why certain content is promoted, with platforms potentially losing control over their own complex algorithmic systems


Community radio stations play a crucial role in debunking fake news that spreads through messaging apps like WhatsApp in rural areas


Topics

Sociocultural | Human rights | Development


Both speakers emphasize the importance of local-level approaches and working with available resources rather than waiting for top-down solutions, whether through local civil society engagement or community-based infrastructure development.

Speakers

– Mr. Abdallah Alsalmi
– Mr. Poncelet Ileleji

Arguments

Civil society should work at local levels and engage judiciary systems when governments don’t support digital rights initiatives


Infrastructure development must be realistic – focus on equipping community-based media with internet connectivity to disseminate information to those without meaningful broadband access


Topics

Development | Human rights | Infrastructure


Unexpected consensus

Opposition to direct content regulation despite different professional backgrounds

Speakers

– Ms. Paula Gori
– Mr. Poncelet Ileleji

Arguments

Regulation should target risks posed by platform operations rather than content itself, as demonstrated by the EU’s Digital Services Act approach


Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users


Explanation

Despite Gori working in European policy frameworks that could support regulation and Ileleji working in African development contexts, both strongly oppose direct content regulation, showing unexpected alignment across different regional and professional perspectives on fundamental free speech principles.


Topics

Legal and regulatory | Human rights


Acknowledgment of public service media limitations and need for improvement

Speakers

– Ms. Paula Gori
– Mr. Patrick Leusch

Arguments

Public service media must adopt more innovative and positive approaches to content production to remain attractive to audiences


International broadcasters invest in censorship circumvention technologies to reach audiences in countries with limited press freedom, sharing research with other independent media


Explanation

Both speakers, while advocating for public service media, acknowledge its current limitations and need for adaptation – Gori noting the need for innovation to remain attractive, and Leusch describing the extensive technical efforts required to reach audiences, showing realistic assessment rather than defensive positioning.


Topics

Sociocultural | Human rights | Infrastructure


Overall assessment

Summary

The speakers demonstrated strong consensus on fundamental principles including the importance of multi-stakeholder governance, opposition to direct content regulation, the crucial role of public service media during crises, and the need for local-level approaches to digital challenges. They also shared realistic assessments of current limitations and the need for innovative solutions.


Consensus level

High level of consensus on core principles with complementary rather than conflicting approaches. The agreement spans technical, policy, and implementation perspectives, suggesting a mature understanding of the complex challenges in meaningful digital access. This consensus provides a strong foundation for collaborative action across different sectors and regions, though implementation details may require further coordination.


Differences

Different viewpoints

Content regulation approach

Speakers

– Mr. Poncelet Ileleji
– Giacomo Mazzone

Arguments

Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users


Fact-checking alone is insufficient and requires a more comprehensive, holistic approach including regulation to negotiate effectively with platforms


Summary

Ileleji strongly opposes any content regulation as a violation of freedom of speech, advocating instead for fact-checking and user responsibility. Mazzone argues that fact-checking alone is inadequate and calls for more comprehensive regulatory approaches including platform regulation.


Topics

Human rights | Legal and regulatory | Sociocultural


Unexpected differences

Scope of regulatory intervention needed

Speakers

– Mr. Poncelet Ileleji
– Giacomo Mazzone

Arguments

Content regulation should be avoided as it infringes on freedom of speech; focus should be on fact-checking and moral responsibility of users


Fact-checking alone is insufficient and requires a more comprehensive, holistic approach including regulation to negotiate effectively with platforms


Explanation

This disagreement is unexpected because both speakers are concerned about disinformation and platform accountability, yet they have fundamentally different views on the role of regulation. Ileleji, coming from an African development perspective, takes a strong anti-regulation stance emphasizing individual responsibility, while Mazzone, representing European broadcasting interests, advocates for stronger regulatory frameworks. This suggests different regional or institutional perspectives on balancing freedom of expression with platform accountability.


Topics

Human rights | Legal and regulatory | Sociocultural


Overall assessment

Summary

The discussion revealed relatively limited but significant disagreements, primarily centered on regulatory approaches to content and platforms. While speakers largely agreed on fundamental goals like combating disinformation, ensuring meaningful digital access, and strengthening multi-stakeholder governance, they differed on implementation mechanisms and the appropriate level of regulatory intervention.


Disagreement level

Moderate disagreement with significant implications. The main tension between pro-regulation and anti-regulation approaches reflects broader global debates about internet governance, freedom of expression, and platform accountability. These disagreements could impact policy development, as they represent different philosophical approaches to addressing digital challenges – one emphasizing regulatory frameworks and institutional solutions, the other prioritizing individual responsibility and minimal intervention. The disagreements also reflect different regional perspectives and institutional contexts, which could complicate international cooperation on digital governance issues.




Takeaways

Key takeaways

Meaningful digital access requires going beyond basic connectivity to include reliability, affordability, appropriate devices, digital literacy, relevant local content, and safe digital environments


Public service media plays a crucial role in combating disinformation and providing reliable information, especially during crisis situations


Community radio stations are essential for reaching rural populations in developing countries and need strengthening through partnerships with international broadcasters


Internet censorship is a growing global challenge requiring sophisticated circumvention technologies, with legal justification under Article 19 of UN Human Rights Charter


Platform algorithm transparency is lacking, preventing understanding of content promotion mechanisms and creating risks for democratic discourse


Current regulatory frameworks like Article 19 are outdated and need updating to address modern internet governance challenges


Multi-stakeholder governance models need strengthening through local IGF forums and coalition building to prevent internet fragmentation


Resolutions and action items

Re-energize local IGF forums to build coalitions and push governments for open internet policies


Strengthen community radio stations with digital literacy tools and partnerships with international broadcasters


Implement proper data access provisions under the Digital Services Act for academic researchers


Work toward updating Article 19 of international human rights law to address modern digital access challenges


Combine Global Digital Compact implementation with WSIS review to strengthen IGF


Engage civil society at local levels and work with judiciary systems when governments don’t support digital rights


Involve municipalities as key players in digital rights advocacy due to their proximity to citizens


Unresolved issues

How to effectively balance content regulation with freedom of speech concerns


Addressing the financial and infrastructural capacity gaps between large and small organizations when accessing platform data


Determining what policy changes will be needed once full platform data access reveals the true extent of disinformation impacts


Resolving the tension between platform dependence and editorial independence for public service media


Bridging the digital divide for 2.6 billion people still without internet connectivity


Establishing effective mechanisms to prevent government internet shutdowns and website blocking


Creating sustainable funding models for circumvention technologies and mirror servers


Suggested compromises

Focus regulation on platform operational risks rather than content to preserve freedom of speech while addressing harmful effects


Develop independent intermediary bodies to facilitate data access between platforms, regulators, and researchers


Combine infrastructure development with community media strengthening as a realistic approach to meaningful connectivity in underserved areas


Balance transparency requirements for public service media with operational security needs for circumvention activities


Engage multiple stakeholders (government, civil society, academia, technical experts) in dialogue while accepting that parties may ‘agree to disagree’ on some issues


Thought provoking comments

We need to go beyond just simple connectivity and beyond just having a device that is connected to the internet because it’s all about the experience, it’s all about what the internet users can make of the internet.

Speaker

Mr. Abdallah Alsalmi


Reason

This comment reframes the entire discussion by distinguishing between mere technical access and meaningful digital experience. It introduces the crucial concept that connectivity without context, skills, and relevant content is insufficient for true digital inclusion.


Impact

This foundational insight set the framework for the entire discussion, leading other speakers to build upon this distinction throughout the session. It shifted the conversation from technical infrastructure to human-centered outcomes and user experience.


I like to look at it from a grassroots level. What information are community radios able to provide to their citizens? In most cases, you look at sub-Saharan Africa, for example, my beloved continent, where only about 37% of the population have broadband connectivity.

Speaker

Mr. Poncelet Ileleji


Reason

This comment challenges the discussion’s implicit focus on high-tech solutions by grounding it in real-world constraints. It highlights how meaningful access must work within existing infrastructure limitations and emphasizes the continued importance of traditional media as bridges to digital access.


Impact

This perspective fundamentally shifted the discussion from theoretical policy frameworks to practical implementation challenges. It forced other participants to consider how solutions must be adapted to different technological and economic contexts, leading to more nuanced policy recommendations.


We don’t even know the answers that Gen AI is giving to people. So whenever you ask an AI chatbot about something, it is giving you an answer and no one knows it. It is like between you and the chatbot, which is creating an additional element, probably even more scary.

Speaker

Ms. Paula Gori


Reason

This observation introduces an entirely new dimension to the information access problem that hadn’t been previously discussed. It highlights how AI systems create invisible information silos that are even more opaque than social media algorithms.


Impact

This comment expanded the scope of the discussion beyond traditional censorship and platform algorithms to include AI-mediated information access. It added a new layer of complexity to the meaningful access challenge and influenced the conversation toward more comprehensive regulatory approaches.


Article 19 is, I would say, really outdated and we need to have another look at it, update it, and renew commitments to it… Any government can shut down the internet at any time without due recourse to legal background or text.

Speaker

Mr. Abdallah Alsalmi


Reason

This is a bold critique of fundamental international human rights law, arguing that existing legal frameworks are inadequate for the digital age. It challenges participants to think beyond current legal structures and consider more fundamental reforms.


Impact

This comment shifted the discussion from operational challenges to fundamental legal and rights-based frameworks. It elevated the conversation to question basic assumptions about how digital rights should be protected internationally, leading to discussions about multi-stakeholder governance and the need for new international agreements.


We shouldn’t have regulation on content. It goes against freedom of speech. So immediately you start trying to regulate content, you are infringing on the rights of people.

Speaker

Mr. Poncelet Ileleji


Reason

This comment introduces a crucial tension in the discussion by firmly establishing the boundary between acceptable and unacceptable regulatory approaches. It forces the conversation to grapple with the fundamental conflict between combating misinformation and preserving free speech.


Impact

This strong position created a defining moment in the discussion, forcing other participants to clarify their regulatory proposals and distinguish between content regulation and platform behavior regulation. It led to more nuanced discussions about risk-based rather than content-based approaches to platform governance.


If we really access those data, we will understand so many things about disinformation, and also about the impact, that probably we will have to change the whole policy framework once we have the knowledge of what is really happening online.

Speaker

Ms. Paula Gori


Reason

This comment reveals the profound uncertainty underlying current policy approaches and suggests that access to platform data might fundamentally change our understanding of digital information systems. It acknowledges that current policies may be based on incomplete information.


Impact

This insight added a meta-level perspective to the discussion, suggesting that the policy solutions being discussed might themselves need to be reconsidered once better data becomes available. It introduced humility into the policy discussion and emphasized the importance of evidence-based approaches.


Overall assessment

These key comments collectively transformed what could have been a technical discussion about internet access into a multifaceted exploration of digital rights, governance, and social equity. The progression from Alsalmi’s foundational distinction between connectivity and meaningful access, through Ileleji’s grassroots reality check, to Gori’s insights about AI and data transparency, created a comprehensive framework that addressed technical, social, legal, and ethical dimensions. The comments built upon each other to reveal the complexity of meaningful digital access, moving the discussion from simple solutions to nuanced understanding of interconnected challenges. The tension between Ileleji’s strong stance on content regulation and others’ regulatory proposals created productive friction that led to more sophisticated policy thinking. Overall, these interventions elevated the discussion from operational concerns to fundamental questions about digital rights, democratic governance, and global equity in the digital age.


Follow-up questions

What is the legal basis for public service media to provide circumvention tools and VPN explainers to audiences in censored countries?

Speaker

Mr. Patrick Leusch


Explanation

This raises important legal and ethical questions about the mandate and authority of public broadcasters to engage in circumvention activities; the question was resolved through consultation with the German Bundestag Legal Service, citing Article 19 of the UN Human Rights Charter


How can we update and modernize Article 19 of the UN Human Rights Charter regarding access to information?

Speaker

Mr. Abdallah Alsalmi


Explanation

Article 19 is described as ‘outdated’ and in need of renewal to address modern digital access challenges and internet governance issues


When will the European Commission publish the Delegated Act to make platform data access operational for researchers?

Speaker

Ms. Paula Gori and Thora (PhD researcher)


Explanation

The DSA establishes obligations for platforms to provide data access to researchers, but the operational framework through the Delegated Act is still missing, hindering academic research


How can we establish legal protections for internet access similar to those that exist for shortwave radio and satellite TV?

Speaker

Mr. Abdallah Alsalmi


Explanation

Unlike traditional broadcasting technologies protected by International Telecommunication Union (ITU) rules, the internet lacks legal protection against government shutdowns and blocking


What are the answers that AI chatbots are giving to users about news and information?

Speaker

Ms. Paula Gori


Explanation

There’s no oversight or knowledge of what information AI systems provide to users, creating a potentially more concerning information gap than private messaging apps


How can smaller academic institutions and civil society organizations be equipped to handle large datasets from platforms?

Speaker

Ms. Paula Gori


Explanation

There’s concern about creating a two-speed system where only well-funded institutions can analyze platform data, particularly affecting Eastern European countries and smaller organizations


How can we strengthen partnerships between international broadcasters and community radio stations in developing countries?

Speaker

Mr. Poncelet Ileleji


Explanation

Community radio stations need digital literacy tools and connections to larger media organizations to provide reliable information and counter disinformation at the grassroots level


What is the EBU pledge to platforms that was recently launched?

Speaker

Giacomo Mazzone


Explanation

A specific initiative by the European Broadcasting Union and newspaper associations directed at platforms was mentioned but not elaborated upon


How can we change the narrative around digital rights and democracy to make it attractive to those who don’t believe in democracy?

Speaker

Ms. Paula Gori


Explanation

There’s a need to reframe conversations about digital access and rights in ways that appeal to broader audiences beyond those already committed to democratic values


What role can municipalities play in strengthening meaningful digital access and combating disinformation?

Speaker

Ms. Paula Gori


Explanation

Local governments may be better positioned than national governments to work directly with citizens on digital access and information quality issues


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Day 0 Event #150 Digital Rights in Partnership Strategies for Impact

Day 0 Event #150 Digital Rights in Partnership Strategies for Impact

Session at a glance

Summary

This discussion focused on digital rights and partnerships, examining strategies for protecting human rights in online environments through cross-sector collaboration. The panel, moderated by Peggy Hicks from the UN Office of the High Commissioner for Human Rights, brought together representatives from civil society, tech companies, multi-stakeholder organizations, and the European Commission to address challenges in safeguarding digital human rights.


Ian Barber from Global Partners Digital highlighted significant challenges facing civil society organizations, including funding crises, capacity issues, and the erosion of multi-stakeholder governance approaches. He emphasized that civil society organizations are struggling to meaningfully engage in policy processes while facing resource constraints and burnout. Jason Pielemeier from the Global Network Initiative discussed how GNI has successfully expanded its membership globally, bringing diverse perspectives from over 100 organizations across all continents to address tech governance challenges collaboratively.


Alex Walden from Google outlined the technical and operational challenges companies face in content moderation, particularly balancing harm prevention with freedom of expression at scale. She emphasized the importance of stakeholder engagement through forums like IGF and organizations like GNI to incorporate civil society feedback into company policies. Esteve Sanz from the European Commission described the EU’s comprehensive approach to digital rights, including the Digital Services Act and efforts to address the gap between international commitments and actual practice regarding digital repression.


The panelists acknowledged the tension between human rights considerations and competing priorities like national security and economic innovation. However, they argued that these concerns are not mutually exclusive and that human rights approaches can actually reinforce security and innovation goals. The discussion concluded with examples of successful collaborative initiatives and emphasized the critical importance of transparency, accountability mechanisms, and continued multi-stakeholder engagement in protecting digital rights globally.


Keypoints

## Major Discussion Points:


– **Challenges facing civil society in digital rights advocacy**: Including funding crises, capacity issues, burnout, and the erosion of multi-stakeholder approaches in governance processes, particularly affecting organizations in the Global South who are already under-resourced.


– **Multi-stakeholder collaboration models and their effectiveness**: Discussion of how organizations like the Global Network Initiative (GNI) work to integrate diverse perspectives from civil society, companies, academics, and investors, with emphasis on expanding global representation beyond North America and Europe.


– **Technical and operational challenges for tech companies**: Balancing the prevention of online harms while respecting human rights, particularly freedom of expression, dealing with issues of scale, speed of content moderation, and navigating complex regulatory environments across different jurisdictions.


– **International cooperation and regulatory frameworks**: The European Union’s approach to digital rights through legislation like the Digital Services Act, the gap between diplomatic commitments and real-world implementation of digital rights protections, and the role of international processes like WSIS+20.


– **Accountability mechanisms and transparency in digital rights partnerships**: Discussion of how to ensure accountability in cross-sector partnerships, particularly when working in the Global South, including the need for transparency, ongoing engagement, and effective watchdog functions.


## Overall Purpose:


The discussion aimed to foster cross-sector collaboration between civil society, tech companies, governments, and international organizations to strengthen human rights protection in online environments. The panel sought to identify challenges, share good practices, and explore strategies for more effective partnerships in addressing digital rights issues globally.


## Overall Tone:


The discussion maintained a professional and collaborative tone throughout, though it acknowledged serious challenges in the field. While panelists expressed concerns about “digital depression” and the gap between commitments and reality, the conversation remained constructive and solution-oriented. There was a notable effort to balance realism about current difficulties with optimism about the potential for meaningful collaboration and the continued importance of defending digital rights. The tone became slightly more hopeful toward the end as panelists shared specific examples of successful initiatives and partnerships.


Speakers

**Speakers from the provided list:**


– **Peggy Hicks** – Works with the Office of the High Commissioner for Human Rights in Geneva


– **Alex Walden** – Global Policy Lead for Human Rights and Freedom of Expression at Google


– **Ian Barber** – Legal and Advocacy Lead at Global Partners Digital


– **Esteve Sanz** – Head of Sector for Internet Governance and Multi-Stakeholder Dialogue at the European Commission


– **Jason Pielemeier** – Executive Director of the Global Network Initiative


– **Audience** – Alejandro from Access Now (asked a question during the Q&A session)


**Additional speakers:**


None identified beyond those in the provided list of speaker names.


Full session report

# Digital Rights and Partnerships: A Comprehensive Discussion Summary


## Introduction and Context


This panel discussion, moderated by Peggy Hicks from the UN Office of the High Commissioner for Human Rights in Geneva, brought together key stakeholders to examine strategies for protecting human rights in online environments through cross-sector collaboration. The conversation featured Ian Barber from Global Partners Digital, Jason Pielemeier from the Global Network Initiative, Alex Walden from Google (joining from Oslo), and Esteve Sanz from the European Commission.


Peggy Hicks opened by highlighting OHCHR’s recent work in digital rights, including a Brazil judiciary event and a MENA region study examining how digital technology affects human rights defenders. She noted Norway’s resolution calling for assessment of risks faced by human rights defenders through digital technology, setting the stage for a discussion on practical strategies for strengthening partnerships across sectors whilst addressing systemic challenges threatening effective digital rights advocacy.


The discussion took place against a backdrop of increasing digital repression worldwide, funding challenges for civil society organisations, and growing tensions between human rights considerations and competing priorities such as national security and economic innovation.


## The False Dichotomy: Human Rights vs. Security and Innovation


A central theme throughout the discussion was challenging the perceived tension between human rights considerations and other priorities. Alex Walden from Google articulated this position most clearly, arguing that “in order to achieve national security interests, in order to focus on ongoing innovation and have competition in the market, we have to ensure that human rights is integrated across those conversations and remains a priority… we have to do all of them at the same time.”


Ian Barber supported this perspective, arguing that human rights approaches and security outcomes can be mutually reinforcing rather than opposing concepts. This reframing challenges the prevailing narrative that positions human rights as an obstacle to security or innovation, and offers a strategic approach to addressing the funding crisis facing civil society organisations.


Esteve Sanz demonstrated this integrated approach through the EU’s legislative process for the Digital Services Act, which he described as “complex, almost miraculous” in successfully balancing multiple concerns using the Charter of Fundamental Rights as a framework. This example provided concrete evidence that comprehensive regulatory approaches can address multiple priorities simultaneously without sacrificing fundamental rights protections.


## Civil Society Challenges and the Funding Crisis


Ian Barber presented a sobering assessment of the challenges facing civil society organisations in digital rights advocacy. He identified what he termed a “narrative crisis,” where funding has increasingly shifted away from human rights approaches towards national security and economic impact priorities. This shift has created significant capacity issues for civil society organisations, leading to layoffs, burnout, and insufficient expertise to participate effectively in policy forums.


Barber emphasised that these challenges are particularly acute for organisations in the Global South, which were already under-resourced and now face even greater difficulties in meaningfully engaging with policy processes. The proliferation of forums and processes has created an additional burden, making it difficult for under-resourced organisations to keep up and participate meaningfully across multiple venues.


The civil society representative argued that effective collaboration requires moving beyond tokenistic engagement to genuine power-sharing arrangements. He stressed that “the most impactful forms [of collaboration] are going to be those that truly shift power and resources back to civil society,” challenging other panellists to consider concrete accountability mechanisms rather than remaining at the level of aspirational statements about partnership.


## Multi-Stakeholder Collaboration Models and Global Engagement


Jason Pielemeier from the Global Network Initiative provided a contrasting perspective, highlighting successful examples of expanding multi-stakeholder engagement globally. He described GNI’s intentional growth from its original North American and European focus to encompass over 100 members across four constituencies (companies, civil society organisations, academics, and investors) representing all continents.


Pielemeier acknowledged the emotional toll of working in digital rights, coining the term “digital depression” to complement discussions of digital repression. Despite these challenges, he maintained an optimistic perspective, arguing that “the Internet is still an incredibly vibrant and critical space, especially when you compare it to offline mediums for free expression and freedom of association and assembly.”


He also noted the misappropriation of language in policy discussions, specifically mentioning how the phrase “fork in the road” was being used inappropriately in some contexts. The GNI representative emphasised the importance of creating concrete forums that bring stakeholders together around specific, tangible issues rather than abstract discussions.


Importantly, Pielemeier highlighted ongoing collaboration between GNI and Global Partners Digital on WSIS engagement, including workshops in nine countries to involve wider stakeholders in World Summit on the Information Society input processes, demonstrating practical approaches to global engagement.


## Technology Company Perspectives and Operational Challenges


Alex Walden outlined the complex technical and operational challenges that technology companies face in balancing harm prevention with human rights protection. She emphasised the particular difficulty of respecting freedom of expression, privacy, and non-discrimination whilst preventing online harms at the scale and speed required by modern digital platforms.


Walden highlighted the challenge of content moderation, which increasingly requires AI assistance whilst maintaining human oversight for context-sensitive content. She stressed the importance of regulatory safe harbours that enable effective content moderation and policy iteration, noting that the complex regulatory environment across different jurisdictions creates significant operational challenges.


The Google representative described regional stakeholder meetings designed to ensure that feedback reaches both policy-drafting and product-building teams within companies. She specifically mentioned the Rights and Risk Forum held in Brussels the previous month as an example of creating transparent conversations between stakeholders using concrete regulatory artefacts, demonstrating practical approaches to multi-stakeholder engagement around specific policy challenges.


## European Union Regulatory Approaches and Global Diplomacy


Esteve Sanz from the European Commission provided insights into the EU’s comprehensive approach to digital rights protection through both legislative frameworks and international diplomacy. He described the EU’s focus on securing global agreements such as the Global Digital Compact and the Declaration for the Future of the Internet to commit states to respect digital rights internationally.


However, Sanz also presented a sobering assessment of current trends, noting that “we are in a new stage where the Internet is not only controlled, but it’s used for control, and what we see is a very depressing trajectory.” He identified a concerning gap between diplomatic achievements in securing commitments from powerful global actors and the reality of increasing digital repression on the ground.


Sanz mentioned an April conference on “governance of Web 4.0” that resulted in important principles, and described the EU’s public diplomacy efforts, including calling out internet shutdowns and funding projects like protectdefenders.eu to provide urgent support for human rights defenders.


The European Commission representative highlighted the Digital Services Act as a model for balancing human rights considerations with regulatory requirements, specifically noting its application to Very Large Online Platforms and Very Large Online Search Engines. He emphasised the upcoming WSIS Plus 20 review as a critical juncture for determining future internet governance directions.


## Accountability Mechanisms and Transparency


The discussion of accountability mechanisms was significantly shaped by a question from Alejandro of Access Now, who asked about accountability mechanisms for partnerships, especially when working in the Global South where it’s easy for Global North actors to disengage. This question forced panellists to move beyond aspirational statements to concrete mechanisms for ensuring sustained commitment.


The panellists identified several approaches to accountability, though they acknowledged that current mechanisms remain insufficient. Pielemeier described GNI’s independent assessment process for companies, which includes detailed reviews of internal systems and policies, whilst noting that similar accountability mechanisms for state actors remain limited to “naming and shaming” through bodies like the UN Office of the High Commissioner for Human Rights.


Barber emphasised civil society’s watchdog role in bringing issues to light through transparency and ongoing iterative processes, arguing that transparency is fundamental to meaningful accountability. Walden pointed to the Digital Services Act as a beginning model for public risk assessments that provide accountability in regulatory settings, though she noted that the effectiveness of these new tools remains to be evaluated.


The discussion revealed consensus that transparency is fundamental to accountability across all sectors, whether through public reporting, independent assessments, or open dialogue. However, the panellists acknowledged that more tangible legal processes and structural supports are needed to ensure accountability, particularly in international partnerships involving Global South organisations.


## Global Engagement Challenges and Resource Distribution


The discussion highlighted significant challenges in ensuring truly global engagement in digital rights protection, particularly in meaningfully including voices from the Global South. Barber described the coordination challenges facing under-resourced organisations, whilst Pielemeier outlined the intentional work required to build global membership and ensure diverse perspectives in governance processes.


The panellists acknowledged that the proliferation of forums and processes, whilst potentially offering more opportunities for engagement, can create overwhelming burdens for under-resourced organisations. This creates a paradox where efforts to increase inclusivity may inadvertently exclude those with the greatest resource constraints.


The discussion revealed ongoing questions about how to scale multi-stakeholder approaches and ensure they reach beyond well-resourced organisations based in major policy centres, with no clear solutions emerging for addressing these structural challenges.


## Future Directions and Critical Junctures


The panellists identified the WSIS Plus 20 review as a critical juncture for determining future directions in internet governance, with the potential to either build on multi-stakeholder human rights values or move in directions that could further marginalise civil society voices. This process represents both an opportunity and a risk for the digital rights community.


Several concrete initiatives were highlighted as ongoing work, including the Global Digital Rights Coalition’s coordination efforts, continued Rights and Risk Forums for discussing regulatory implementation, and various EU diplomatic initiatives. However, the panellists acknowledged that these efforts, whilst valuable, remain insufficient to address the scale of challenges facing digital rights protection globally.


The discussion revealed particular concern about maintaining the internet’s role as a space for freedom whilst addressing legitimate security and innovation concerns. This balance requires continued collaboration across sectors, but the structural challenges facing such collaboration—including funding constraints, capacity limitations, and power imbalances—remain largely unresolved.


## Conclusion


This discussion revealed both the complexity of challenges facing digital rights protection and the potential for meaningful collaboration across sectors when properly structured and resourced. The panellists demonstrated strong consensus on core principles, including the importance of transparency, the need for genuine rather than tokenistic multi-stakeholder engagement, and the concerning gap between international commitments and actual implementation of digital rights protections.


However, the conversation also highlighted significant structural challenges that threaten the sustainability of current approaches, particularly the funding crisis facing civil society organisations and the erosion of inclusive governance mechanisms. The panellists’ emphasis on moving beyond symbolic engagement to genuine power-sharing arrangements reflects a mature understanding of what effective collaboration requires, even as they acknowledged the difficulty of achieving such arrangements in practice.


The discussion balanced realism about current challenges with constructive approaches for moving forward, rejecting the false dichotomy between human rights and other priorities in favour of integrated approaches that treat these concerns as mutually reinforcing. This framework offers promise for future collaboration, though significant work remains to translate it into sustainable practice that addresses the structural inequalities and resource constraints that currently limit effective global engagement in digital rights protection.


Session transcript

Peggy Hicks: Thanks everybody. Please take seats. I’m hoping that you can all hear me through your microphones or through the headsets. It’s all good. Wonderful. You’ll see we’re missing one of our panelists, but we decided to start off because we want to have as much time as possible for you all to hear from our wonderful panelists today and then also to have a chance to open up for your questions and comments as well. This event is focusing on digital rights and partnerships, strategies for impact, and we’re really looking today to have a really open conversation about the intersection of online experiences and fundamental human rights. We want to highlight the challenges that are faced by civil society, tech companies, and enforcement agencies in protecting these rights within what we all know is a complex and borderless online environment. We need to recognize, of course, that each of us comes to this issue from a different place, that states are the ones that have the legal obligations to take action, that companies have a duty to respect human rights under the UN guiding principles on business and human rights, and civil society, of course, is there to keep everybody else honest on both of those obligations, one hopes. So we are going to have a chance to talk a little bit about some of the collaborative projects that are going on in this space, some of the good practices that are happening, and obviously the idea today is to really foster more cross-sector collaboration to strengthen human rights protection in online environments, so that we all have a better sense of what we’ve learned, what we’re currently doing, and what more we can do. My name’s Peggy Hicks. I work with the Office of the High Commissioner for Human Rights in Geneva, and we, too, have been working in this space and trying to figure out what we can contribute. We had a recent event, for example, in Brazil, working with the judiciary on social media regulation. We’ve done a study within the MENA region, focusing on the experiences there. The idea within that document, in particular, is that we’re looking for a smart mix of mandatory measures and policy incentives that states can put in place, which means that they’ll meet not only their obligations to respect human rights, but that they are doing what they need to regulate the space so that companies are also contributing to a more human rights protective environment. We have a project that we’ll probably hear a little bit about during this session, which we call the BTEC project, that encourages cross-stakeholder, cross-sector multi-stakeholder engagement, and it’s focused on really trying to work with companies to answer some of the tough questions we see, including around AI and content moderation. And it remains a challenge to figure out how to do this, especially since we’re working with some of the largest companies. One of the big questions is how we make this experience more global, how we engage more with small and medium enterprises. And right now, for example, we have a track that’s focusing on how we deal with investors within the tech space as well. We have found though, through our discussion with the companies, that the work together with them actually has strengthened the way that they work amongst each other, but also that we learn and have a privileged position through what we learn to be able to bring some of what’s happening within the companies to a more general audience, for which we’re very grateful.
We’re also, of course, working with the international institutions in this space, including the UN Human Rights Council and some of the things that come out of that body, and I’ll do a shout-out now to our host, Norway, for an important resolution that they passed recently, which for example calls on us to assess the risks faced by human rights defenders through digital technology and do work on that issue. So we’re working across these different platforms with our trusted partners to try to have this type of conversation that we’re having today, and we’re looking forward to doing it in more depth with you. And in order to do that, I’m super privileged to have with us a wonderful panel. I guarantee that the panel will not be composed only of men when Alex Walden from Google arrives soon, so if anybody’s taking screenshots, hold up, we’ve got Alex coming soon, so we’ll be a bit more balanced. But today we’re very fortunate to have with us, and I’ll present to you briefly now, Jason Pielemeier, who we’ve worked with very closely, as the Executive Director of the Global Network Initiative; Ian Barber, to my right, is the Legal and Advocacy Lead at Global Partners Digital; and Esteve Sanz is the Head of Sector for Internet Governance and Multi-Stakeholder Dialogue at the European Commission. So you’ll see they come from very different perspectives as well, I think, so it’ll be really great to have their different contributions. Alex Walden, who will join us as I mentioned, is the Global Policy Lead for Human Rights and Freedom of Expression at Google. So we’re going to jump right into the conversation, and I’m going to turn to you, Ian, first and ask you: from the perspective of civil society, what specific challenges do civil society organizations face in advocating for and protecting online human rights when confronted with these pressing issues? Please.


Ian Barber: Sure, thanks Peggy. Good morning, I hope you guys can all hear me okay. To answer your question, there are a number of challenges that civil society is facing right now. Especially in the past few years, it seems funders are advocating or pushing for things along the lines of national security or kind of an economic impact, kind of looking for impact investment. So it’s reflecting kind of a narrative crisis, I believe, for the human rights-based approach, which needs a bit of a rethink at this point for us. And this has impacts not just on civil society in the global north but particularly civil society in the global majority, which are already, you know, less well resourced and able to make an impact, so I think that’s critical to acknowledge. And this leads to, I think, some serious capacity issues. So of course, with lack of funding there’s less of an ability for civil society across the globe to be able to make an impact. We’re seeing this result in layoffs, you know, burnout, and also not having the expertise then to be able to come into these forums and spaces and be able to effectively advocate. We know that there’s been a proliferation of forums and processes in the past few years. It’s quite difficult to keep up with even kind of the standard ones we’ve had around for a while. One’s based in Geneva, UPR focused, one’s treaty based, but now, you know, the UN Cybercrime Convention, the AHC, we have WSIS, we have AI governance efforts at this point. So keeping on top of all these things to be able to have them well resourced with your team is quite difficult, and I think those are kind of the central things we’re seeing. And then another big one is also this general erosion, I think, or challenging of this multi-stakeholder approach to governance or policymaking. So whether it’s at the national level or the regional level or the global level, CSOs are kind of not able to meaningfully engage and be a part of the decision-making process or be able to input, and there’s kind of a lack or a closing of mechanisms that are inclusive and transparent for civil society to be able to engage. And this is problematic because we’re seeing an increasing tendency toward state-led processes that then don’t include the expertise and the advocacy points of civil society, including those that are most impacted, including those that are on the ground and have the knowledge that’s needed to make effective decisions and frameworks. So I think that’s kind of a high-level point. It could go on for a long time, but I think I’ll stop there.


Peggy Hicks: Great. No, I think you’ve hit on many of the points that we’re going to dive into more deeply during the conversation, and, you know, I want to say just on that last point you made, this idea that when civil society isn’t able to put in their input, I really want to emphasize that that’s not just a disadvantage to civil society, who want to have their voice heard, but to the process itself, which is weakened by the lack of the expertise and real experience that civil society can bring in. You’ve hit on some of the things that I think everybody is going to want to come back to eventually as well on the main challenges that we see in the space, which unfortunately are shared, I’m sure, by all of us on the panel and many of you in the audience as well. But I’ll turn now to Jason, and obviously for those that don’t know the Global Network Initiative, although I think most people at IGF do, it represents a unique coalition of civil society, academic, investor, and private sector stakeholders. And we’d like to hear more, Jason, about how GNI ensures that diverse perspectives and priorities from all these members are effectively integrated into your strategies for online human rights protection, and maybe give us a concrete example of successful collaborative efforts that you’ve engaged in. Thanks. Yeah, and welcome to Alex, who’s joining us. Already introduced you, Alex, so you’re with us.


Jason Pielemeier: Thanks so much, Peggy. It’s a pleasure to be here, I’m really glad to be a part of this panel, to be here in Norway, to be back at the IGF. So, hi to everyone in the audience, both in person and virtually. So, I appreciate the opportunity to share a bit more about the Global Network Initiative, GNI, and how we work, and how we try to create space for and amplify the voices in particular of a really diverse range of stakeholders. As people may know, GNI is a multi-stakeholder organization, so our membership falls into four categories. We call them constituencies, so we have academic members, we have companies, including Google, we have civil society organizations, including Global Partners Digital, and we have investors as members. So, it’s a very big tent, but it wasn’t always that way. When GNI started about 17 years ago, it was a relatively small set of mostly North American and some European organizations. But today, we have over 100 members from every populated continent, and we’ve really made some significant strides to put the global in Global Network Initiative. And that’s been very intentional. We’ve worked really hard over the last decade to reach out to organizations of all types in all kinds of different regions, to be very conscious of the issues that we focus on, the spaces that we curate, the events that we attend, in order to really demonstrate our desire to be a part of a truly global conversation and to bring a diverse range of voices into those conversations. So, it hasn’t been straightforward or necessarily easy to grow the network the way we have, but we’ve been, we think, quite successful, and we really appreciate the range of intelligence and viewpoints and experiences that new members have brought into GNI. And so that’s really part of what we are about: trying to, you know, build this space, this trusted coalition of organizations that can come together and address difficult challenges in the tech governance realm. And we bring our members together in various ways: we do learning sessions, we have a bespoke accountability process for our companies, and we’ve made efforts to expand the opportunities for members from across the world to participate in those assessments that we conduct. We also try and go out into the world and attend other events like the IGF, but also regional forums like the Forum on Internet Freedom in Africa, the Digital Rights and Inclusion Forum, regional IGFs all over the world, and hold sessions with our members and with other stakeholders and partners in those settings as well. In terms of an example, I guess one example of how we’ve grown the network in a way that I think hopefully is having impacts in jurisdictions outside of North America and Europe is the work that we did to bring MTN, the South African telecommunications company, into GNI. MTN has been on a journey for several years now, and I think it has worked with a range of actors, including I think the BTEC project, to understand their responsibilities under the UN guiding principles and other frameworks and to really build out their own approach to human rights. So they’ve developed a really robust human rights statement. They joined GNI in 2022. Their transparency report has gotten much deeper and much more detailed. I encourage folks to take a look at that as an example of a really good technology company transparency report. And they are now going through their first GNI assessment.
And that has created a lot of opportunity for them to kind of look inward at their systems and policies and understand better the risks related to their business operations, the jurisdictions that they’re operating in, and to get important feedback from a wide range of stakeholders through GNI. So I’ll stop there. I’m happy to talk more about any of that as we go through the rest of the panel.


Peggy Hicks: Great. Thanks, Jason. And it’s really good to hear about the growth and the way that you’ve been able to do it. I think Ian had already raised the difficulty: sometimes there’s a commitment to a multi-stakeholder approach, but actually bringing everybody into the room, and doing it in a meaningful way, is one of the challenges. So your experience in doing that is really good to hear about. I think we’ll need to come back a bit more on some of the challenges, including in terms of some of the disincentives for companies to do it. But we’re going to actually turn to Alex now, who’s got very direct experience with, you know, these challenges that companies face in navigating the space. So Alex, if we could hear from you a bit about the significant technical or operational challenges that Google faces in mitigating online harms while simultaneously respecting freedom of expression, including in response to national contexts and government requests. And after that, you get two questions. The second is how you’re also working to incorporate feedback from civil society organizations and human rights experts into your policies and practices. Thanks, Alex.


Alex Walden: Thank you. Thanks for the question. And thanks for bearing with me on my travel from Oslo to Lillestrøm. It’s a good question and I appreciate the framing, because really the challenge is about how you prevent online harms while you are respecting human rights, in particular freedom of expression, privacy, and non-discrimination. So just to censor is not what’s difficult. What’s difficult is to ensure that you’re respecting rights while you are trying to take a tailored approach to removing content that is harmful. So in particular, the two things I want to flag are, one, sort of the speed and scale. That is sort of a policy challenge and it’s also obviously a very kind of operational challenge. The amount of content that we have being uploaded to our products every day means that the volume is high and we need to figure out ways to address that at scale. And so we’re using, obviously there are human moderators that participate in that process, especially for content that requires human context to understand. But we use AI, and we’re increasingly using AI to help us do that faster. So again, scale is always, and you’ll hear all the companies say this, scale is a challenge. And so figuring out how to address that scale in a responsible way remains an ongoing challenge that we are always sort of iterating on how to do better. The other piece is really the complex regulatory environment, which means a few things. One, we need safe harbors in order to do this work effectively, to make sure that we are able to implement content moderation practices that are effective and kind of iterate on our policies. And so one is safe harbors, ensuring that we have the regulatory frameworks to do this work. In terms of how we engage stakeholders and take feedback, there’s a few things I’d say. On the largest scale, it’s important for companies to show up to venues where our stakeholders are, so that we can participate in conversations with them and make sure that we’re hearing from them in that context. So things like IGF, venues like RightsCon, showing up there and being part of the conversation and hearing what the concerns are from stakeholders, that sort of being present is an important thing for us to do at those large venues. Then it’s about being part of organizations where sort of more curated versions of that conversation are taking place. So being a member of GNI and engaging in GNI is a really important way in which we do that as Google: a place where we have core stakeholders that are talking about these issues and the trade-offs all the time. And then specifically, just Google as an individual company, we have programs in place, as part of the human rights program and along with our trust and safety colleagues, ensuring that we are doing regional stakeholder meetings and stakeholder meetings with our sort of global colleagues as well, to make sure that we’re hearing directly from experts in the field about what’s happening in their region, what their experience is with our products, how things are working or are not working, and ensuring that that feedback is going directly to the teams that are drafting our policies, enforcing our policies and building our products.


Peggy Hicks: Great, thanks very much Alex. I mean, I think this area of stakeholder engagement and what works and what doesn’t is one of those that we have to keep iterating and improving on. We did a BTEC paper on this that people might want to refer to, with sort of the five key principles, but one of the things we found talking to all of you is that there are good practices and there are ways to improve, and I think there’s still a lot of work to be done. But we need to move over, and I’m really glad to have with us another perspective coming from the European Commission: Esteve Sanz. We’d really like to hear about the role international cooperation plays in the European Commission’s strategy to protect online human rights, especially with countries and regions outside the EU, how that might contribute to the WSIS Plus 20 process, and how you’re looking at the EU’s role in this important space. Thanks.


Esteve Sanz: Thank you so much, Peggy. I am very glad to be on this panel, the European voice in the panel. Digital and human rights are an absolute priority for the EU. We’ve been working on it for a long, long time. We have focused especially on getting agreements at the global level, including the Global Digital Compact, the Declaration for the Future of the Internet, etc., that really commit states and critical actors to respect digital human rights, not censor the internet, not carry out internet shutdowns, etc. A very important achievement in the Global Digital Compact is that it commits states in the UN not to shut down the Internet. At the same time, there is a gap here. We have done a lot of analysis also, engaged academics and civil society to help us understand what’s going on on the ground when it comes to states using the Internet for control. I think that we are in a new stage where the Internet is not only controlled, but it’s used for control, and what we see is a very depressing trajectory. So there is this gap that is very puzzling between the diplomatic achievements that we have managed to secure in committing global actors, very powerful global actors, to respect fundamental freedoms online, and what’s going on in reality. So this is very damaging; this is a diagnostic that we have on the table. We have engaged in several funding exercises. We have the Global Initiative for the Future of the Internet, which has a project that we call the Internet Accountability Compass, that will help us precisely analyze this gap between what we are committing to and what’s really going on in terms of digital repression. This is extremely important for us. Every time that we engage in human rights and digital dialogues with countries, we bring up digital repression; that’s very important for us. When there is a big event, an Internet shutdown, we engage in public diplomacy as well, in Iran, in Jordan, so we have call-outs for Internet shutdowns there. And there is a lot of investment as well, so we have, for example, projects like protectdefenders.eu which provides funding in case of urgent need for journalists and other civil society actors. We work a lot with you, Peggy, so we have a lot of funding and projects in common, one on Internet shutdowns, several funding projects that really aim at empowering OHCHR
to play a critical role in this field. And so, yes, this is all going on. We are very much aware of the funding situation; there are a lot of internal discussions within the EU about how we can step up our role in that area, because we feel it’s going to be really dramatic if we don’t act soon. Of course, discussions related to funding are always extremely delicate in any public administration, and it’s not easy, but I can assure you that we have achieved some successes already and more funding will be flowing. Whether the EU can cover all the funding that is being withdrawn from those organizations is of course an open question, but it has really sent a signal that the EU should step up, I would say. So, on the WSIS Plus 20 review: this is very important, and it links with what I was explaining at the beginning, that we have actually achieved a lot of things when it comes to UN discussions about states committing to defend digital rights, etc., but then what we see on the ground is a bit puzzling. The WSIS Plus 20 review will double down on those efforts. What EU member states have discussed, and this is how we will go into the negotiations of the outcome document, is to really take stock of the rise of digital authoritarianism. This has been presented by our ambassador to the UN already: digital authoritarianism is on the rise, and this has to be acknowledged. And then, based on that, to propose what we hope will be unprecedented language at the UN level in the WSIS Plus 20 resolution on digital human rights. This language is still the object of internal discussions; we will probably publish a non-paper with that language, which, again, we hope goes beyond what has been part of any UN resolution so far, because the challenges are so high that we need to move up. Part of that language will be for sure going much more concretely into statements that protect journalists, civil society, etc. from digital repression. But that’s our aim, it’s a public aim, it’s very ambitious, it’s very difficult to achieve and pull off, but we of course count on like-minded partners and stakeholders who will need to be participating very intensively in the WSIS Plus 20 process to do that. We think that the context is really the right one, so that we can achieve at least that. But again, the reality might be different than whatever the outcome document of WSIS declares, so it’s important to bear in mind that gap.


Peggy Hicks: Oh, thank you, Esteve. It’s really interesting to hear your comments about the disconnect between where we get to in terms of international commitments and what we see in the world, and I think we feel that on the financial side as well, where the demand for action, for work in this area just grows exponentially, but we are facing some of the challenges that you mentioned. I want to look back just quickly to Ian, and then I’m going to have a question for everybody, and then we’re coming directly to you all quickly. Ian, I wanted to ask you, you know, when you look at collaboration from a civil society perspective, what is civil society looking for? What does it need from governments, tech companies, and other stakeholders in order to advance human rights protection? Where have you seen good collaboration happening?


Ian Barber: Great. Well, I just want to say that there’s a lot of great collaboration already on the table from these individuals and from their remarks, so I just want to acknowledge that. But also, I think at the end of the day, the most impactful forms are going to be those that truly shift power and resources back to civil society and allow them to engage. So from governments, we’ve already alluded to this, it’s ensuring that those policy processes, whether national, regional, or global, are actually inclusive, that they’re bringing in those voices, that input is received, there’s acknowledgement, and there’s a feedback loop as well. So that’s key. I think funding, which we’ve hit on already, is also a key metric, and also kind of recommitting to human rights obligations themselves, of course, when things do happen. From companies, I think that, you know, they can really operationalize their commitments through transparency and access. That can come in a variety of forms: it can come down to access to data, it can be on their impact assessments, it could be enforcement practices, and also this kind of iterative multi-stakeholder engagement with, you know, groups that are in different regions that are more at risk. Those are going to be key as well, and I think they kind of lead to this co-design and co-development of policy and governance and frameworks that we want to see. And I think for more multi-stakeholder coalitions like GNI, and again these things are already very much being done, there’s definitely a collaboration deficit that I’m seeing. So there’s a recognition that we have challenges, but really there’s not always structural support then to address them. So what you need to do is then champion equity in partnerships, as Jason alluded to. It’s bringing in voices from the global majority, the Global South, and civil society as co-leaders in agenda-setting and advocacy, not just, you know, kind of tokenism. It’s facilitating access to knowledge and sharing it so that engagement can be effective and realized. And there’s a need to do this to kind of bridge the gap of trust, I think, among stakeholders across different areas, because without it we’re going to have a situation where the structures don’t support everyone, there’s no final effective impact, and it’s again this symbolic means of doing things. So I think that’s kind of a cross-cutting response there, but yeah.


Peggy Hicks: That’s great, and I think it’s really important to make that point that it’s got to be intentional. You have to put the resources and effort into it if you’re going to really make things work in a more global way, like Jason talked about with GNI. So before I turn to the audience, I do want to ask one sort of lightning-round question of all of you, because you started off, Ian, by noting that we’re navigating this human rights field in the midst of two really oppressive pressures: from the securitization side, where all that matters is, you know, the cybercrime convention, as we saw, and David Kaye was just talking about how we make exceptions for anything that may be, you know, relevant from the national security side. And then, I think even more prevalent now, is this rationale around the competition, innovation, economic side, where anything that stands in the way, and human rights are sometimes seen as obstacles or barriers to overcome, means that companies and other stakeholders, including governments, seem somewhat less invested in answering some of the questions we’re asking today than they have been, for me at least, at prior IGFs. So I wondered how you’re looking at that, and when you get that type of pressure, you know: why should we focus on doing it the multi-stakeholder way and bringing in civil society, and why does it matter to make sure that we’re building in human rights within the digital tech work that we do, given these competing tensions around national security and the need for greater competition and effective innovation? Give me the 30-second answer to that that you use, which I’m sure comes up quite frequently in everybody’s lines of work. So maybe just to go this way, start with you, Alex.


Alex Walden: Never comes up for me. No, no, never. I think, you know, you just hit on these things that are sort of part of my internal and external conversations every day. From my perspective, what I say to my colleagues inside the company and my stakeholders outside is that we have to figure out how to focus on all these things at the same time. In order to achieve national security interests, in order to focus on ongoing innovation and have competition in the market, we have to ensure that human rights is integrated across those conversations and remains a priority. States have a duty to uphold their obligations to human rights, and so it is imperative that they, in those conversations about regulation, about how they use AI as part of their public sector, ensure that they’re upholding that obligation. And companies also have a duty to do that too. But I think there’s a role for everyone, and it is imperative that governments do it first in order to sort of set the stage for all of the other actors to be able to show up and do their part. Companies are providing technology to governments for national security purposes. And we need to know that governments are thinking about their human rights obligations in the context of when they’re procuring that. So I think there’s a lot of good guidance out there. BTEC has done some of it already in thinking about procurement and how companies should be thinking about their human rights obligations. But really, we have to do all of them at the same time.


Peggy Hicks: Great, thanks Alex. Ian?


Ian Barber: Yeah, I think just building on what Alex said: for me, when I’m speaking to governments or any other stakeholder, I kind of challenge them and say that, actually, I don’t think human rights approaches and outcomes and security or whatnot are even potentially opposing things. They can be very much mutually reinforcing concepts, and they can support one another. So folding them in is sometimes a creative way, kind of a Trojan horse, to get this funding, which is essential. And I think that, as a final point, what it really comes down to is that you do need civil society in the room to bring that expertise, to bring the knowledge and the know-how to be able to arrive at these solutions. So it’s kind of challenging and rejigging the narrative and then also ensuring that those people are at the table.


Peggy Hicks: Great, thanks. Jason?


Jason Pielemeier: Yeah, I mean, I guess two things. One, taking a step back, I just had sort of an interesting kind of mental moment because when you were talking, you said digital repression and I heard digital depression. And I think that’s because of the comments that we heard initially from Ian and just generally how a lot of us are feeling these days, which I want to acknowledge is real. So we’re dealing with both digital repression and digital depression. But I think it’s really important to remind ourselves that the Internet is still an incredibly vibrant and critical space, especially when you compare it to offline mediums for free expression and freedom of association and assembly. And that’s something we sometimes forget. We can look at the annual Freedom on the Net reports, which are excellent, and see this trend towards declining freedom. And it’s real. And we have to acknowledge it. But if you compare offline and online realities for people in even and maybe especially the most repressed places on earth, there’s a real reason why they cling to the social media spaces, the open Internet that they are able to access, whether it’s finding cracks through the repressive laws in their country or using anti-censorship technologies to get access to the open Internet. And we don’t have to look far and just look at the example of Iran today to see that reality. So I want to just kind of infuse that sort of optimism or hope that, you know, there is still something worth fighting for. There’s a reason why it’s important to have these important statements from governments, even if they’re not always living up to them in practice. There’s a reason why we continue to get together in these multistakeholder settings to talk about what we can do, even if it’s easier sometimes to sort of give in to cynicism and digital depression. So not an answer to your question, but something that I feel like we needed to kind of just remind ourselves of.


Peggy Hicks: Very helpful. Esteve.


Esteve Sanz: Every time there is legislation that deals with digital in the EU, we strive, of course, to find the right balance between security, between all these elements. The legislative process in the EU is a complex one. The Parliament is involved, civil society, there are a lot of consultations, the Council, the Commission, there is a proposal. So it’s a very complex, almost miraculous way of doing legislation that yields something like the Digital Services Act, which is perhaps the cornerstone of our digital regulation right now, and which, as you well know, takes a whole-of-society approach, that’s what we call it. The legislation itself has pieces that are aimed at involving civil society in the process of governance of the platforms themselves, there are transparency provisions, users can complain about takedowns of content, etc. So the balance that we found in the legislative process when it comes to the Digital Services Act, we think it’s extremely valuable, and of course we are pitching it to our partners globally, bearing in mind that each region, each country has its own approach. But so far I think that we have managed to find that balance. There is something very important for us in the EU legislative system, which is the Charter of Fundamental Rights. So whatever legislation we put on the table, whatever proposal is on the table, it needs to comply with the Charter, and having that Charter as the ultimate element that frames everything that we do in the EU, but especially on digital, has been very valuable, because in the end it shows us a path towards finding that balance correctly.


Peggy Hicks: Wonderful, thanks so much. So I’m gonna jump quickly now to our audience to see if we’ve provoked any thoughts from any of you that you’d like to put on the table, or any questions for our panel here. I’m not exactly sure how the tech here works; it looks like there are microphones alongside, I think you probably need to go to one of those, if anybody will give me a thumbs up that that’s how we’re supposed to do it. Yes, okay, I see movement, looking forward to hearing the comment of the gentleman, nope, he’s just leaving, bye. Anybody want to come in? Trust me, we can keep the conversation going amongst ourselves, I know these guys, but happy to hear from you. I know it’s a little awkward to have to get out of your chairs. All right, I’ll come back to you all. So I think Jason did something good, which is, I think it is one of those spaces where it’s important for us to look for good examples and to put ideas on the table that we think are things that we want to see replicated. So if you had to just give me an idea of an incentive or something that you want to see more of that you’ve seen, you know, in a particular context in which you’ve worked, give me some good examples that we can leave our audience with today. Alex, can I start with you?


Alex Walden: Yeah, I mean, I think, well, one thing I’ll flag, just because maybe it’s top of mind and recent, and because it hits on some of the DSA things too, is that GNI and DTSP, which is another organization that works with companies around risk assessment and harms issues, convened a Rights and Risk Forum in Brussels last month. And that was an opportunity for all of the companies who are members of GNI and DTSP, who are also VLOPs and VLOSEs under the DSA, to come together and have conversations about the assessments that are now public and all the information that’s in them. So we have a lot of actual artifacts that we can discuss, and we can talk about the challenges and what people want to see more of from companies. And so I think fora where we have a lot of material that we can talk through, and have really open, transparent conversation between civil society and companies, that’s a really excellent example of how we can take a piece of regulation in action and talk amongst the stakeholders about what’s working, what’s not, and how we can improve. So that’s just a recent one that I think is really pressing, especially for companies in particular.


Peggy Hicks: Great. No, I think that’s really an important point, Alex. And to me, it also gives rise to something that I often think in this space, which is that evidence base, that idea of going beyond the general conversation to really talk about some specific case studies. When something went wrong, putting what went wrong on the table sometimes and unpacking it and figuring out how to do better is really important. And I know within our work, where we do peer review amongst companies similarly situated, we have some really, really frank and useful conversations that can push things forward. But you can’t do that if you stay at the, you know, 10,000-foot level. Ian?


Ian Barber: Yeah, I think I want to mention the kind of precedent of modalities and process and procedure that we’ve seen. So in the AHC negotiations on the UN Cybercrime Convention, in both a formal and informal way, there’s kind of evidence, even if the final output wasn’t what we would have been looking for, that you can use this kind of existing basis moving forward in other forums. So the modalities of the AHC were a bit more open for civil society and others to engage and provide input, and have that be taken in, and speak at the UN, which is great. And then also informally, there was a brain trust group of organizations that was working with companies across the stakeholder lines to kind of advance our central aim. So I think that those two examples have been used in other UN processes and forums, to kind of replicate them and build in a more multi-stakeholder approach to things, which I think is excellent. And also a shameless plug, which is that GPD is now working on the WSIS review, coordinating the Global Digital Rights Coalition, working with CSOs in the Global North and Global South, which we’ve seen as good practice, and other stakeholders. So we’ll be doing that moving forward. So another positive note to hopefully end on.


Peggy Hicks: Great, thanks. I’m going to skip over you, Jason. I’m going to go to Esteve, because you already put yours on the table. I’ll give you another chance, though.


Esteve Sanz: In April, we organized a global multi-stakeholder conference on what we call the governance of Web 4.0, which is essentially the impact of AI, quantum, et cetera, on the internet. So the global impact of those very powerful technologies, blockchain and others, on the global internet, not the governance itself of these technologies. This was a very well attended and very intense conference, and there was a very prominent human rights angle. And what emerged from that is actually a series of principles that were the object of consensus, or rough consensus, among the conference participants, and that basically set the ground so that we can continue being optimistic in the context of this future internet, where the stakes are much higher. What you can do with AI in terms of repression is massive. What you can do with AI in terms of freedom of speech and liberation and analysis of bureaucratic processes so that you empower citizens, etc., is also massive. So what we set up after that conference was this series of principles that set the ground while we see the future internet emerging. Because this is not a given; the internet is what we make of it, right? If we want that space to continue to be a tool for self-expression and for freedom and for democracy, etc., these are the principles that we think we should follow. So this leaves us with a lot of optimism, because it was relatively easy, of course not every stakeholder was at that table, but it was relatively easy to come up with a series of principles that would chart a good path. So this is also impacting our position in the WSIS Plus 20 negotiations. We will bring up these higher stakes when it comes to these very powerful technologies impacting the internet: that if we don’t set things right, then things can go massively wrong very easily. And we hope that this is acknowledged in the UN context as well.


Peggy Hicks: Great. Back to you, Jason.


Jason Pielemeier: Yeah, maybe just to mention one other collaboration across this table: the Rights and Risk Forum and the work we’re doing on the Digital Services Act, and also trying to think about how we continue to ensure that not just the risk assessments under the DSA, but those under the Online Safety Act and other digital regulations, remain consistent with the UN guiding principles and broader international human rights frameworks. But also, we’ve been working recently with GPD to empower civil society voices from the global majority to be more engaged in the WSIS process, precisely so that we can support the kinds of initiatives that it sounds like the EU is eager to put forward, and make sure that these are not just seen as sort of Western approaches that don’t resonate and have support across the world. So just today, I think we’ll be publishing a series of reports from the partners in nine different countries. We’ve done workshops at lightning pace over the last two months around the world with civil society actors in these different countries, to help inform a wider audience and involve a wider group of stakeholders in the input processes to WSIS. Obviously that work will continue over the next several months until the end of this year, when the WSIS process concludes. But I think it’s really important to emphasize WSIS, I mean here, being here at the IGF, as just such a critical moment for this community, given that all of these new technologies are creating opportunities for governance to go in different directions. And that direction could learn from and build on and incorporate the sort of multi-stakeholder, human rights-based values that we have successfully collectively pioneered as a community, or it could go in a different direction. And so it’s really a fork in the road. Not a phrase that I like to use anymore given the way it’s been misappropriated, but I think it’s a critical time for us to be here together at the IGF, and I really appreciate all of the panelists here sort of speaking about how we can continue to work towards that WSIS outcome that will sort of reinvigorate the multi-stakeholder…


Peggy Hicks: And you jumped ahead again, which I think is really good. It shows we’re on the same track, because the next thing I wanted to ask, and I don’t see anybody lined up at the mic yet. Maybe somebody back there. Please come over and do it. I’ll throw out my question, too, and I’ll let you all choose. A number of you have focused on the difficulty sometimes in making sure that both the resources and the engagement are happening as effectively outside of Europe and the global north context, and figuring out how more can be done, both to reap the benefits of digital technology, but also to make sure that the tools and resources needed to have the types of conversations and engagement that we need are available in places without as many resources, and how we can better make sure that that is happening. So I wanted to get your thoughts on that, but turning to our colleague here first. Please.


Audience: Thank you. Alejandro from Access Now, and I think very related to that comment is, what are the accountability mechanisms for these type of partnerships, especially when you’re working in the global south and it’s very easy for global north actors to disengage when these type of partnerships are happening? In your experiences, what are those accountability mechanisms that we can create?


Peggy Hicks: Great question. Thank you very much. So maybe, Jason, you want to start on that one, since you’re doing quite a bit?


Jason Pielemeier: Sure. So I think accountability can take a lot of different forms, to Alejandro’s question. In GNI, for instance, we have an accountability mechanism that is built in to hold companies to the commitments that they make. And that’s a process that involves sort of very detailed review of internal company systems and policies by independent assessors. And as I mentioned at the beginning, we’ve been working really hard to sort of build more opportunities for a wider group of GNI members to be a part of those conversations. I think at the sort of multilateral level, the question of accountability has always been a somewhat vexing one. The Office of the High Commissioner for Human Rights plays a very important role in calling out where states fall short of their commitments. But more tangible legal processes are lacking in many contexts. We do have, obviously, committees related to different treaty bodies that can produce reviews. We have the universal periodic review. We have the special mandates. So it’s not a barren field, but it is also one that still could be sowed with, I think, more seeds. I don’t know. I’ll stop trying to torture that analogy. And I think for some of these other spaces, whether it’s the IGF itself as a venue for collaboration or the WSIS process, the Global Digital Compact, yeah, it’s an open question, right? How do we ensure that not just the states that are producing the final text, but the other stakeholders who are committing themselves and involving themselves in those processes continue to carry them out? Part of it involves being at places like the IGF, where we can continue to sort of stand on stages and have to answer to audiences about what we’ve done since we’ve made these commitments. Part of it involves, I think, funding and being able to have support for watchdogs like Access Now and others in civil society. So it’s going to take a lot of different tools, but I think at least in this space, you know, we have forums and venues like this, which we sometimes take for granted, but which I think we need to double down and reinvest in.


Peggy Hicks: Great. Esteve, do you want to say a few words on the accountability side?


Esteve Sanz: If you don’t get journalists, civil society activists, etc. to call out those abuses, it’s going to get very difficult at the global level to trace that. Because, again, we have this legitimacy gap between what is written, the safeguards, etc., and what we see in practice. And there is a fundamental problem of complexity and transparency: either you engage the multi-stakeholder community to tackle that, or we will simply not know.


Peggy Hicks: And I think that’s a lead-in for you, Ian, to both look at the accountability question from the civil society side and the role that it plays.


Ian Barber: Yeah, I mean, civil society can play a key role, as was noted, in serving as kind of a watchdog or an observer, even. And one that can then bring the issues or problems to light to the broader community, I think, is a central component, and one that’s kind of overlooked, in a way. And I think that when you’re speaking about accountability in general, a lot of this comes down to transparency and openness in decision-making, in the processes, and in what’s been done moving forward. And this should not just be a one-off event, as has been alluded to. It should be done in an iterative and ongoing way and in different manners. So I’ll keep it short and sweet.


Peggy Hicks: And, Alex, on your side, the company side?


Alex Walden: Yeah, I mean, I think, at least for GNI companies, Jason hit on a key piece for us, which is the independent assessment that we have as members of GNI. And so that’s a key way in which we are looking to ensure that we have accountability for our commitment to principles, the GNI principles in particular. And then, obviously, being transparent about our commitment to the UNGPs and how that manifests across our products. That looks like qualitative transparency about what our policies are, quantitative transparency about how we’re implementing them, enforcement measures, etc. And that’s not just for the global majority, that’s the entire world and how we’re enforcing that. Obviously, we have the Digital Services Act in Europe. And so that is a sort of beginning entrée into what a risk assessment, a report that becomes public, can look like. And so I think we’re all learning about what the value of something like that is for the purposes of accountability in a regulatory setting as well.


Peggy Hicks: Yeah, no, I think that’s a really good point. And thanks so much for the question, because it’s one where we really are learning now, and I think that’s an important thing to say: how useful are some of these tools going to be? Do they provide the value that we need? I think Ian Barber’s point about transparency is absolutely crucial: without transparency, we don’t get to accountability very easily. But I’m sure there’s more we can do, and I’m sure Access Now will help us to figure it out. So thanks so much for the comment. And I’m getting the signal that we’re going to have to draw the session to a close. In doing so, I really want to thank those responsible for organizing it, which was not my office, but Christina Herrera from Google and Erlinson from ADAPT, who brought us all together today. We’re very glad to have had a chance to talk through these issues with you. I hope you come away from it with some good ideas on potential collaboration and comments you want to follow up on in the course of the IGF going forward. And obviously, feel free to reach out to any of the panelists to get more information on some of the good practices that we’ve discussed. Thanks so much for joining us today.


I

Ian Barber

Speech speed

210 words per minute

Speech length

1384 words

Speech time

393 seconds

Narrative crisis with funding shifting toward national security and economic impact rather than human rights approaches

Explanation

Civil society organizations are facing a fundamental challenge where funding priorities are moving away from human rights-based approaches toward national security and economic impact considerations. This shift represents a crisis in how human rights work is valued and supported, requiring a rethink of advocacy strategies.


Evidence

This particularly impacts civil society in the global majority which are already less well resourced and able to make an impact


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Development


Disagreed with

– Esteve Sanz

Disagreed on

Approach to addressing funding crisis in civil society


Capacity issues due to lack of funding leading to layoffs, burnout, and insufficient expertise to participate effectively in forums

Explanation

The funding crisis has direct operational consequences for civil society organizations, resulting in reduced staff, exhausted workers, and inadequate technical expertise. This creates a vicious cycle where organizations cannot effectively participate in important policy forums and advocacy spaces.


Evidence

With lack of funding there’s less of an ability for civil society across the globe to be able to make an impact, resulting in layoffs, burnout and not having the expertise to come into these forums and spaces


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Development


Erosion of multi-stakeholder approach with closing mechanisms for inclusive and transparent civil society engagement

Explanation

There is a concerning trend toward state-led processes that exclude civil society input, undermining the multi-stakeholder governance model. This erosion occurs at national, regional, and global levels, preventing civil society from meaningfully contributing their expertise and advocacy perspectives.


Evidence

CSOs are not able to meaningfully engage and be a part of the decision-making process with a lack or closing of mechanisms that are inclusive and transparent, leading to state-led processes that don’t include the expertise and advocacy points of civil society


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Legal and regulatory


Agreed with

– Esteve Sanz
– Jason Pielemeier

Agreed on

There is a concerning gap between international commitments on digital rights and actual implementation


Proliferation of forums and processes making it difficult for under-resourced organizations to keep up and participate meaningfully

Explanation

The rapid expansion of policy forums and governance processes creates an overwhelming landscape for civil society organizations to navigate. With limited resources, organizations struggle to maintain effective participation across multiple venues, from traditional UN processes to new AI governance efforts.


Evidence

There’s been a proliferation of forums and processes – Geneva-based, UPR focused, treaty based, UN Cybercrime Convention, AHC, WSIS, AI governance efforts – making it quite difficult to keep up and have them well resourced


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Legal and regulatory


Disagreed with

– Jason Pielemeier

Disagreed on

Scale of multi-stakeholder engagement challenges


Need for civil society to be co-leaders rather than token participants, with structural support for effective engagement

Explanation

Effective collaboration requires moving beyond symbolic inclusion to genuine partnership where civil society organizations have leadership roles in policy development and governance frameworks. This necessitates structural changes that provide the resources and mechanisms needed for meaningful participation.


Evidence

Champion equity in partnerships, bringing in voices from the global majority as co-leaders rather than tokenism, facilitating access to knowledge and sharing so engagement can be effective and realized


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Development


Agreed with

– Jason Pielemeier
– Alex Walden

Agreed on

Multi-stakeholder engagement requires intentional effort and resources to be truly global and inclusive


Human rights approaches and security outcomes can be mutually reinforcing rather than opposing concepts

Explanation

Rather than viewing human rights and security as competing priorities, they should be understood as complementary and mutually supportive. This reframing challenges the false dichotomy often presented in policy discussions and provides a strategic approach for advocacy.


Evidence

I challenge them to say that human rights approaches and outcomes and security are not opposing things but can be very much mutually reinforcing concepts that can support one another


Major discussion point

Balancing Competing Pressures in Digital Rights Work


Topics

Human rights | Cybersecurity


Civil society’s watchdog role in bringing issues to light through transparency and ongoing iterative processes

Explanation

Civil society organizations serve a crucial accountability function by monitoring and exposing problems in digital rights protection. This role requires transparency, openness in decision-making processes, and continuous rather than one-off engagement to be effective.


Evidence

Civil society can play a key role serving as a watchdog or observer that can bring issues or problems to light to the broader community, requiring transparency, openness, and decision-making in an iterative and ongoing way


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Alex Walden
– Jason Pielemeier
– Peggy Hicks

Agreed on

Transparency is fundamental to accountability in digital rights protection


Coordination of Global Digital Rights Coalition for WSIS Review working with CSOs in Global North and South

Explanation

Global Partners Digital is coordinating a coalition that brings together civil society organizations from both developed and developing regions to participate in the WSIS review process. This represents a concrete example of inclusive global engagement in digital governance.


Evidence

GPD is now working for the WSIS Review, coordinating the Global Digital Rights Coalition, working with CSOs in the Global North and Global South


Major discussion point

Global Engagement and Resource Distribution


Topics

Human rights | Development


J

Jason Pielemeier

Speech speed

153 words per minute

Speech length

1784 words

Speech time

696 seconds

GNI’s intentional growth from North American/European focus to over 100 global members across four constituencies

Explanation

The Global Network Initiative has deliberately expanded from its original limited geographic scope to become a truly global organization with diverse membership. This transformation involved conscious efforts to reach out to organizations worldwide and demonstrate commitment to global dialogue rather than Western-dominated discourse.


Evidence

When GNI started 17 years ago, it was a relatively small set of mostly North American and European organizations. Today, we have over 100 members from every populated continent, and we have worked hard over the last decade to reach out to organizations of all types in different regions


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Development


Agreed with

– Ian Barber
– Alex Walden

Agreed on

Multi-stakeholder engagement requires intentional effort and resources to be truly global and inclusive


Disagreed with

– Ian Barber

Disagreed on

Scale of multi-stakeholder engagement challenges


Success story of MTN’s journey in developing human rights approach through multi-stakeholder engagement and GNI assessment process

Explanation

MTN, a South African telecommunications company, exemplifies how companies can successfully integrate human rights into their operations through multi-stakeholder collaboration. Their progression from initial engagement to developing comprehensive policies demonstrates the value of sustained partnership and accountability mechanisms.


Evidence

MTN developed a robust human rights statement, joined GNI in 2022, their transparency report has gotten much deeper and more detailed, and they are now going through their first GNI assessment, creating an opportunity to look inward at their systems and get feedback from stakeholders


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Economic


GNI’s independent assessment process for companies with detailed review of internal systems and policies

Explanation

GNI operates a comprehensive accountability mechanism that involves thorough examination of member companies’ internal human rights systems and policies. This process includes independent assessors and has been expanded to include broader member participation from around the world.


Evidence

We have a bespoke accountability process for our companies involving detailed review of internal company systems and policies with independent assessors, and we’ve made efforts to expand opportunities for members from across the world to participate in those assessments


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Ian Barber
– Alex Walden
– Peggy Hicks

Agreed on

Transparency is fundamental to accountability in digital rights protection


Role of OHCHR and treaty bodies in calling out state failures, though more tangible legal processes are needed

Explanation

While existing international human rights mechanisms like the Office of the High Commissioner for Human Rights provide important oversight functions, the current accountability landscape remains insufficient. More concrete legal processes and enforcement mechanisms are needed to address gaps in state compliance with digital rights obligations.


Evidence

The Office of the High Commissioner for Human Rights plays a very important role in calling out where states fall short. We have committees related to treaty bodies, the universal periodic review, and special mandates; so while it’s not a barren field, it could still be sowed with more seeds


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Esteve Sanz
– Ian Barber

Agreed on

There is a concerning gap between international commitments on digital rights and actual implementation


Internet remains vibrant space for freedom compared to offline mediums, especially in repressed contexts like Iran

Explanation

Despite concerning trends in digital repression, the internet continues to provide crucial spaces for freedom of expression and association that often exceed offline opportunities. This is particularly evident in authoritarian contexts where people rely on social media and circumvention technologies to access information and organize.


Evidence

If you compare offline and online realities for people in even the most repressed places on earth, there’s a real reason why they cling to social media spaces and the open Internet, using anti-censorship technologies. We don’t have to look far – just look at Iran today


Major discussion point

Balancing Competing Pressures in Digital Rights Work


Topics

Human rights | Freedom of expression


Series of workshops in nine countries to involve wider stakeholders in WSIS input processes

Explanation

GNI has conducted rapid-pace workshops across nine countries to expand participation in the WSIS review process beyond traditional Western voices. This initiative aims to ensure that global perspectives, particularly from the Global South, inform international digital governance discussions.


Evidence

We’ve done workshops at lightning pace over the last two months around the world with civil society actors in nine different countries to help inform a wider audience and involve a wider group of stakeholders in the input processes to WSIS, publishing a series of reports from partners


Major discussion point

Global Engagement and Resource Distribution


Topics

Human rights | Development


A

Alex Walden

Speech speed

180 words per minute

Speech length

1257 words

Speech time

418 seconds

Challenge of preventing online harms while respecting human rights, particularly freedom of expression, privacy, and non-discrimination

Explanation

The core operational challenge for tech companies is balancing harm prevention with human rights protection, requiring nuanced approaches rather than simple censorship. This involves developing tailored content moderation that removes genuinely harmful content while preserving fundamental rights to expression, privacy, and equal treatment.


Evidence

Just to censor is not what’s difficult. What’s difficult is to ensure that you’re respecting rights – specifically freedom of expression, privacy, and non-discrimination – while you are trying to take a tailored approach to removing content that is harmful


Major discussion point

Technical and Operational Challenges for Tech Companies


Topics

Human rights | Content policy


Speed and scale issues requiring AI assistance for content moderation while maintaining human oversight for context-sensitive content

Explanation

The massive volume of content uploaded daily creates operational challenges that necessitate AI-assisted moderation systems. However, human moderators remain essential for content requiring contextual understanding, creating a hybrid approach that balances efficiency with accuracy.


Evidence

The amount of content being uploaded to our products every day means the volume is high and we need to address that at scale. We use AI and we’re increasingly using AI to help us do that faster, but there are human moderators that participate, especially for content that requires human context to understand


Major discussion point

Technical and Operational Challenges for Tech Companies


Topics

Human rights | Content policy


Complex regulatory environment requiring safe harbors for effective content moderation and policy iteration

Explanation

Companies need legal protections to implement effective content moderation practices and continuously improve their policies. The complex and varied regulatory landscape across jurisdictions makes it challenging to develop consistent approaches while meeting different legal requirements.


Evidence

We need safe harbors in order to do this work effectively to make sure that we are able to implement content moderation practices that are effective and iterate on our policies


Major discussion point

Technical and Operational Challenges for Tech Companies


Topics

Legal and regulatory | Human rights


Importance of showing up at venues where stakeholders are present and being part of curated conversations through organizations like GNI

Explanation

Effective stakeholder engagement requires companies to actively participate in forums where civil society and other stakeholders gather, rather than expecting stakeholders to come to them. This includes both large public venues and more focused organizational settings that facilitate deeper dialogue.


Evidence

It’s important for companies to show up to venues where our stakeholders are – things like IGF, venues like RightsCon – and to be part of organizations where a more curated version of that conversation is taking place, like GNI


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Sociocultural


Agreed with

– Jason Pielemeier
– Ian Barber

Agreed on

Multi-stakeholder engagement requires intentional effort and resources to be truly global and inclusive


Need for regional stakeholder meetings to ensure feedback reaches policy-drafting and product-building teams

Explanation

Companies must establish systematic processes for gathering regional stakeholder input and ensuring this feedback directly influences policy development and product design. This requires structured programs that connect external expertise with internal decision-making processes.


Evidence

We have programs in place ensuring that we are doing regional stakeholder meetings with our global colleagues to make sure we’re hearing directly from experts about what’s happening in their region, their experience with our products, and ensuring that feedback goes directly to teams drafting our policies, enforcing our policies and building our products


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Development


Need to focus on human rights, national security, and innovation simultaneously rather than treating them as competing priorities

Explanation

Rather than viewing human rights, security, and innovation as zero-sum trade-offs, companies and governments must develop integrated approaches that advance all three objectives. This requires states to uphold their human rights obligations while pursuing security goals, and companies to maintain their human rights duties across all business activities.


Evidence

We have to figure out how to focus on all these things at the same time. In order to achieve national security interests and focus on innovation and competition, we have to ensure that human rights is integrated across those conversations. States have a duty to uphold their obligations to human rights in regulation and AI use, and companies have a duty too


Major discussion point

Balancing Competing Pressures in Digital Rights Work


Topics

Human rights | Cybersecurity


Digital Services Act as beginning model for public risk assessments providing accountability in regulatory settings

Explanation

The European Union’s Digital Services Act represents an emerging model for regulatory accountability through public risk assessments that companies must produce. This transparency mechanism is still being evaluated for its effectiveness in providing meaningful accountability while serving regulatory compliance purposes.


Evidence

We have the Digital Services Act in Europe as a beginning entrée into what a risk assessment report that becomes public can look like. We’re all learning about what the value of something like that is for the purposes of accountability in a regulatory setting


Major discussion point

Accountability Mechanisms and Transparency


Topics

Legal and regulatory | Human rights


Agreed with

– Ian Barber
– Jason Pielemeier
– Peggy Hicks

Agreed on

Transparency is fundamental to accountability in digital rights protection


Rights and Risk Forum in Brussels as example of transparent conversation between stakeholders using concrete regulatory artifacts

Explanation

The Rights and Risk Forum convened by GNI and DTSP provided a model for productive stakeholder dialogue by focusing on concrete, publicly available risk assessments rather than abstract discussions. This approach enabled more substantive conversations about what works and what needs improvement in company practices.


Evidence

GNI and DTSP convened a rights and risks forum in Brussels for companies that are VLOPs and VLOSEs under the DSA to come together and have conversations about the assessments that are now public, holding an open, transparent conversation between civil society and companies about what’s working, what’s not, and how we can improve


Major discussion point

Global Engagement and Resource Distribution


Topics

Human rights | Legal and regulatory


E

Esteve Sanz

Speech speed

165 words per minute

Speech length

1564 words

Speech time

567 seconds

EU’s focus on global agreements like Global Digital Compact and Declaration for Future of Internet to commit states to respect digital rights

Explanation

The European Union has prioritized securing international commitments through multilateral agreements that establish binding obligations for states to protect digital human rights. These diplomatic efforts aim to create global standards that prevent internet censorship and shutdowns while promoting fundamental freedoms online.


Evidence

We have focused on getting agreements at the global level including the global digital compact, the declaration for the future of the internet that commit states and critical actors to respect digital human rights, not censor the internet, not doing internet shutdowns – a very important achievement in the Global Digital Compact that commits states in the UN not to shut down the Internet


Major discussion point

International Cooperation and Digital Human Rights Protection


Topics

Human rights | Legal and regulatory


Gap between diplomatic achievements in securing commitments and reality of digital repression on the ground

Explanation

Despite successful international negotiations that produce strong commitments to digital rights, there remains a troubling disconnect with the actual experiences of people facing digital repression worldwide. This gap represents a fundamental challenge where formal agreements fail to translate into meaningful protection for individuals and communities.


Evidence

There is this gap that is very puzzling between the diplomatic achievements that we have managed to do in committing global actors to respect fundamental freedoms online and what’s going on in reality. We are in a new stage where the Internet is not only controlled, but it’s used for control, and we see a very depressing trajectory


Major discussion point

International Cooperation and Digital Human Rights Protection


Topics

Human rights | Cybersecurity


Agreed with

– Jason Pielemeier
– Ian Barber

Agreed on

There is a concerning gap between international commitments on digital rights and actual implementation


EU’s public diplomacy efforts calling out internet shutdowns and funding projects like protectdefenders.eu for urgent support

Explanation

The European Union actively engages in public diplomacy to condemn internet shutdowns and digital repression while providing concrete financial support for at-risk individuals. This dual approach combines political pressure with practical assistance for journalists and civil society actors facing immediate threats.


Evidence

When there is a big event, an Internet shutdown, we engage in public diplomacy in Iran, in Jordan, so we have callouts for Internet shutdowns. We have projects like protectdefenders.eu which provides funding in case of urgent need for journalists and other civil society actors


Major discussion point

International Cooperation and Digital Human Rights Protection


Topics

Human rights | Freedom of the press


Disagreed with

– Ian Barber

Disagreed on

Approach to addressing funding crisis in civil society


WSIS Plus 20 review as opportunity for unprecedented UN language on digital human rights acknowledging rise of digital authoritarianism

Explanation

The World Summit on the Information Society review process presents a critical opportunity to establish stronger international language on digital rights that explicitly recognizes and addresses digital authoritarianism. The EU aims to achieve more concrete protections for journalists and civil society than have been included in previous UN resolutions.


Evidence

EU member states will take stock of the rise of digital authoritarianism and propose what we hope will be unprecedented language at the UN level in the WSIS plus 20 resolution on digital human rights, going much more concretely into statements that protect journalists, civil society, etc. from digital repression


Major discussion point

International Cooperation and Digital Human Rights Protection


Topics

Human rights | Legal and regulatory


EU’s legislative process through Digital Services Act demonstrates successful balance using Charter of Fundamental Rights as framework

Explanation

The European Union’s approach to digital regulation, exemplified by the Digital Services Act, shows how fundamental rights can be successfully integrated into complex legislative processes. The EU Charter of Fundamental Rights serves as an overarching framework that ensures all digital legislation complies with human rights standards.


Evidence

The Digital Services Act is the cornerstone of our digital regulation with a multi-stakeholder approach involving parliament, civil society, consultations, council, and commission. Whatever legislation we put on the table needs to comply with the Charter of Fundamental Rights, which frames everything we do in the EU on digital issues and shows us a path towards finding that balance correctly


Major discussion point

Balancing Competing Pressures in Digital Rights Work


Topics

Legal and regulatory | Human rights


EU’s Internet Accountability Compass project to analyze gap between commitments and digital repression reality

Explanation

The European Union has initiated a specific research and analysis project to systematically examine the disconnect between international commitments on digital rights and the actual practice of digital repression by states. This project aims to provide evidence-based understanding of how governments use internet technologies for control rather than just restricting access.


Evidence

We have the Global Initiative for the Future of the Internet that has a project called Internet Accountability Compass that will help us analyze this gap between what we are committing to and what’s really going on in terms of digital repression


Major discussion point

Global Engagement and Resource Distribution


Topics

Human rights | Cybersecurity


P

Peggy Hicks

Speech speed

176 words per minute

Speech length

2927 words

Speech time

992 seconds

OHCHR’s multi-faceted approach to digital rights through judicial engagement, regional studies, and cross-stakeholder projects

Explanation

The Office of the High Commissioner for Human Rights is actively working across multiple dimensions in the digital rights space, including collaborating with judiciary systems, conducting regional research, and facilitating multi-stakeholder engagement. This comprehensive approach aims to develop a ‘smart mix’ of mandatory measures and policy incentives that help states meet their human rights obligations while creating an environment where companies also contribute to rights protection.


Evidence

We had a recent event in Brazil working with the judiciary on social media regulation. We’ve done a study within the MENA region. We’re looking for a smart mix of mandatory measures and policy incentives that states can put in place


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Legal and regulatory


B-Tech project as model for cross-sector engagement with tech companies on AI and content moderation challenges

Explanation

The B-Tech project represents an innovative approach to multi-stakeholder engagement that brings together companies to address complex technical and policy challenges, particularly around AI and content moderation. The project has demonstrated value in strengthening how companies work together while providing OHCHR with insights that can be shared more broadly, though challenges remain in making the experience more global and engaging smaller enterprises.


Evidence

We have a project called the B-Tech project that encourages cross-stakeholder, cross-sector multi-stakeholder engagement, focused on trying to work with companies to answer tough questions, including around AI and content moderation. We have found that the work together with them has strengthened the way they work amongst each other


Major discussion point

Multi-Stakeholder Collaboration and Partnership Strategies


Topics

Human rights | Sociocultural


Importance of moving beyond high-level discussions to evidence-based case studies for meaningful progress

Explanation

Effective collaboration and improvement in digital rights protection requires moving from abstract, general conversations to concrete analysis of specific situations and failures. This approach enables more frank and useful discussions that can drive actual improvements in policies and practices through peer review and detailed examination of what went wrong.


Evidence

That evidence base, that idea of going beyond the general conversation to really talk about some specific case studies, something went wrong, putting what went wrong on the table sometimes and unpacking it and figuring out how to do better is really important. You can’t do that if you stay at the 10,000 feet level


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Ian Barber
– Alex Walden
– Jason Pielemeier

Agreed on

Transparency is fundamental to accountability in digital rights protection


Civil society exclusion weakens policy processes themselves, not just disadvantages civil society

Explanation

When civil society organizations are unable to provide input into policy processes, it represents a loss not only for those organizations seeking to have their voices heard, but fundamentally weakens the quality and effectiveness of the processes themselves. The expertise and real-world experience that civil society brings is essential for developing sound policies and frameworks.


Evidence

When civil society isn’t able to put in their input, that’s not just a disadvantage to civil society, who want to have their voice heard, but to the process itself, which is weakened by the lack of the expertise and real experience that civil society can bring in


Major discussion point

Challenges Facing Civil Society in Digital Rights Advocacy


Topics

Human rights | Development


A

Audience

Speech speed

139 words per minute

Speech length

62 words

Speech time

26 seconds

Need for accountability mechanisms in global north-south partnerships to prevent disengagement

Explanation

There is a critical need to establish concrete accountability mechanisms when partnerships are formed between global north and global south actors in digital rights work. The concern is that without proper accountability structures, global north actors can easily disengage from these partnerships, leaving global south partners without support or follow-through on commitments.


Evidence

What are the accountability mechanisms for these type of partnerships, especially when you’re working in the global south and it’s very easy for global north actors to disengage when these type of partnerships are happening?


Major discussion point

Accountability Mechanisms and Transparency


Topics

Human rights | Development


Agreements

Agreement points

Multi-stakeholder engagement requires intentional effort and resources to be truly global and inclusive

Speakers

– Jason Pielemeier
– Ian Barber
– Alex Walden

Arguments

GNI’s intentional growth from North American/European focus to over 100 global members across four constituencies


Need for civil society to be co-leaders rather than token participants, with structural support for effective engagement


Importance of showing up at venues where stakeholders are present and being part of curated conversations through organizations like GNI


Summary

All speakers agree that meaningful multi-stakeholder collaboration cannot happen by accident – it requires deliberate investment of time, resources, and structural changes to move beyond tokenism to genuine partnership, particularly in engaging voices from the Global South.


Topics

Human rights | Development


Transparency is fundamental to accountability in digital rights protection

Speakers

– Ian Barber
– Alex Walden
– Jason Pielemeier
– Peggy Hicks

Arguments

Civil society’s watchdog role in bringing issues to light through transparency and ongoing iterative processes


Digital Services Act as beginning model for public risk assessments providing accountability in regulatory settings


GNI’s independent assessment process for companies with detailed review of internal systems and policies


Importance of moving beyond high-level discussions to evidence-based case studies for meaningful progress


Summary

All speakers emphasize that transparency – whether through public reporting, independent assessments, or open dialogue – is essential for holding both companies and governments accountable for their digital rights commitments.


Topics

Human rights | Legal and regulatory


There is a concerning gap between international commitments on digital rights and actual implementation

Speakers

– Esteve Sanz
– Jason Pielemeier
– Ian Barber

Arguments

Gap between diplomatic achievements in securing commitments and reality of digital repression on the ground


Role of OHCHR and treaty bodies in calling out state failures, though more tangible legal processes are needed


Erosion of multi-stakeholder approach with closing mechanisms for inclusive and transparent civil society engagement


Summary

Speakers acknowledge a troubling disconnect between formal international agreements and diplomatic commitments on digital rights versus the reality of increasing digital repression and exclusion of civil society from governance processes.


Topics

Human rights | Legal and regulatory


Similar viewpoints

Both speakers reject the false dichotomy between human rights and security/innovation, arguing instead that these objectives can and should be pursued simultaneously as mutually reinforcing rather than competing priorities.

Speakers

– Alex Walden
– Ian Barber

Arguments

Need to focus on human rights, national security, and innovation simultaneously rather than treating them as competing priorities


Human rights approaches and security outcomes can be mutually reinforcing rather than opposing concepts


Topics

Human rights | Cybersecurity


Both speakers emphasize the value of creating concrete forums and processes that bring stakeholders together around specific, tangible issues rather than abstract discussions, whether through regulatory compliance or global governance processes.

Speakers

– Jason Pielemeier
– Alex Walden

Arguments

Rights and Risk Forum in Brussels as example of transparent conversation between stakeholders using concrete regulatory artifacts


Series of workshops in nine countries to involve wider stakeholders in WSIS input processes


Topics

Human rights | Legal and regulatory


Both speakers argue that excluding civil society from policy processes is not just unfair to civil society organizations, but fundamentally weakens the quality and effectiveness of the policy-making process itself by removing essential expertise and perspectives.

Speakers

– Ian Barber
– Peggy Hicks

Arguments

Erosion of multi-stakeholder approach with closing mechanisms for inclusive and transparent civil society engagement


Civil society exclusion weakens policy processes themselves, not just disadvantages civil society


Topics

Human rights | Development


Unexpected consensus

Optimism about internet’s continued value despite digital repression trends

Speakers

– Jason Pielemeier
– Esteve Sanz

Arguments

Internet remains vibrant space for freedom compared to offline mediums, especially in repressed contexts like Iran


EU’s legislative process through Digital Services Act demonstrates successful balance using Charter of Fundamental Rights as framework


Explanation

Despite acknowledging serious challenges with digital repression and the gap between commitments and reality, both speakers maintain optimism about the internet’s fundamental value and the possibility of achieving proper balance through appropriate governance frameworks. This is unexpected given the generally pessimistic tone about current trends.


Topics

Human rights | Freedom of expression


Companies and civil society agreeing on need for regulatory safe harbors

Speakers

– Alex Walden
– Ian Barber

Arguments

Complex regulatory environment requiring safe harbors for effective content moderation and policy iteration


Narrative crisis with funding shifting toward national security and economic impact rather than human rights approaches


Explanation

It’s somewhat unexpected that both a company representative and civil society advocate would implicitly agree on the need for regulatory safe harbors, as civil society often pushes for stronger regulation while companies typically seek regulatory flexibility. Their shared concern about the current regulatory environment suggests common ground on the need for balanced approaches.


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

The speakers demonstrate strong consensus on several key issues: the need for genuine (not tokenistic) multi-stakeholder engagement, the fundamental importance of transparency for accountability, and the concerning gap between international commitments and actual protection of digital rights. They also share concerns about the erosion of inclusive governance processes and the challenges facing civil society organizations.


Consensus level

High level of consensus on core principles and challenges, with speakers from different sectors (government, civil society, private sector, international organization) largely agreeing on both problems and solutions. This suggests a mature understanding of digital rights issues across stakeholder groups, though the consensus also highlights the urgency of addressing systemic challenges in funding, inclusion, and accountability mechanisms. The agreement across diverse perspectives strengthens the legitimacy of calls for more resources and structural changes to support effective digital rights protection.


Differences

Different viewpoints

Approach to addressing funding crisis in civil society

Speakers

– Ian Barber
– Esteve Sanz

Arguments

Narrative crisis with funding shifting toward national security and economic impact rather than human rights approaches


EU’s public diplomacy efforts calling out internet shutdowns and funding projects like protectdefenders.eu for urgent support


Summary

Ian Barber identifies a fundamental narrative crisis where funding is shifting away from human rights approaches, while Esteve Sanz presents the EU’s approach of maintaining funding for human rights work alongside security concerns, suggesting different perspectives on whether the shift is inevitable or can be countered


Topics

Human rights | Development


Scale of multi-stakeholder engagement challenges

Speakers

– Jason Pielemeier
– Ian Barber

Arguments

GNI’s intentional growth from North American/European focus to over 100 global members across four constituencies


Proliferation of forums and processes making it difficult for under-resourced organizations to keep up and participate meaningfully


Summary

Jason presents GNI’s expansion as a success story of inclusive growth, while Ian emphasizes how the proliferation of forums creates overwhelming burdens for under-resourced organizations, representing different views on whether expanding engagement opportunities helps or hinders effective participation


Topics

Human rights | Development


Unexpected differences

Optimism vs. pessimism about digital rights trajectory

Speakers

– Jason Pielemeier
– Esteve Sanz

Arguments

Internet remains vibrant space for freedom compared to offline mediums, especially in repressed contexts like Iran


Gap between diplomatic achievements in securing commitments and reality of digital repression on the ground


Explanation

This represents an unexpected philosophical divide where Jason emphasizes reasons for optimism about the internet’s continued value for freedom, while Esteve presents a more pessimistic assessment of digital repression trends, despite both working toward similar goals


Topics

Human rights | Freedom of expression


Overall assessment

Summary

The discussion revealed relatively low levels of direct disagreement among speakers, with most conflicts being subtle differences in emphasis, approach, or perspective rather than fundamental opposition. The main areas of disagreement centered on funding approaches, engagement strategies, and assessment of current trends.


Disagreement level

Low to moderate disagreement level. The speakers largely shared common goals and values around digital rights protection, but differed on tactical approaches, resource allocation strategies, and assessment of progress. These disagreements are constructive and reflect different organizational perspectives and experiences rather than fundamental ideological divisions. The implications are positive – the disagreements suggest a healthy diversity of approaches within a shared framework, which could lead to more comprehensive and effective strategies if properly coordinated.




Takeaways

Key takeaways

Civil society faces a narrative crisis with funding shifting from human rights approaches to national security and economic impact priorities, leading to capacity issues and reduced ability to participate effectively in digital rights advocacy


Multi-stakeholder collaboration requires intentional effort and resources to be truly global and inclusive, moving beyond tokenism to meaningful co-leadership roles for civil society organizations


Tech companies face significant challenges balancing online harm prevention with human rights protection, particularly around speed/scale issues and complex regulatory environments


There is a concerning gap between international diplomatic achievements in securing digital rights commitments and the reality of increasing digital repression on the ground


Human rights, national security, and innovation can be mutually reinforcing rather than competing priorities when properly integrated into policy frameworks


Transparency is fundamental to accountability, with new models like the Digital Services Act providing examples of public risk assessments and stakeholder engagement


The internet remains a vital space for freedom of expression compared to offline alternatives, particularly in repressive contexts, making continued protection efforts essential


WSIS Plus 20 represents a critical fork in the road for determining whether future internet governance will build on multi-stakeholder human rights values or move in a different direction


Resolutions and action items

Global Partners Digital is coordinating the Global Digital Rights Coalition for the WSIS Review, working with civil society organizations globally


GNI and partners published reports from workshops in nine countries to inform wider stakeholder input into the WSIS process


EU will propose unprecedented language on digital human rights in the WSIS Plus 20 resolution, acknowledging the rise of digital authoritarianism


Continued Rights and Risk Forums will be held to discuss Digital Services Act implementation and other regulatory frameworks with concrete examples


EU’s Internet Accountability Compass project will analyze the gap between digital rights commitments and actual digital repression practices


Unresolved issues

How to adequately fund civil society organizations globally to maintain their capacity for digital rights advocacy


How to make multi-stakeholder engagement more effective and truly global, particularly including voices from the Global South


How to bridge the gap between international commitments on digital rights and actual state practices of digital repression


What specific accountability mechanisms can be developed for partnerships working in the Global South to prevent disengagement by Global North actors


How to scale successful collaboration models like GNI to include more small and medium enterprises


How to effectively integrate investor engagement in tech governance and human rights protection


How to maintain the open internet’s role as a space for freedom while addressing legitimate security and innovation concerns


Suggested compromises

Using human rights approaches as a way to ‘Trojan horse’ funding by demonstrating how human rights and security outcomes can be mutually reinforcing


Developing a ‘smart mix’ of mandatory measures and policy incentives that allows states to meet human rights obligations while enabling appropriate regulation


Creating iterative, ongoing engagement processes rather than one-off events to build trust and ensure sustained collaboration


Establishing safe harbors for companies to enable effective content moderation while maintaining human rights protections


Using existing successful process modalities from forums like the AHC negotiations as templates for more inclusive multi-stakeholder approaches in other venues


Thought provoking comments

I think that we are in a new stage where the Internet is not only controlled, but it’s used for control, and what we see is a very depressing trajectory. So there is this gap that is very puzzling between the diplomatic achievements that we have managed to do in committing global actors, very powerful global actors, to respect fundamental freedoms online and what’s going on in reality.

Speaker

Esteve Sanz


Reason

This comment reframes the entire discussion by distinguishing between the internet being ‘controlled’ versus being ‘used for control’ – a subtle but profound distinction that highlights how digital infrastructure has become a tool of oppression rather than just being restricted. It also identifies the core paradox of digital rights work: the gap between international commitments and ground reality.


Impact

This observation became a recurring theme throughout the discussion, with multiple panelists referencing this ‘gap’ between commitments and reality. It shifted the conversation from focusing solely on policy solutions to acknowledging the fundamental disconnect between diplomatic achievements and actual implementation, adding a layer of realism and urgency to the discussion.


So we’re dealing with both digital repression and digital depression. But I think it’s really important to remind ourselves… the Internet is still an incredibly vibrant and critical space, especially when you compare it to offline mediums for free expression and freedom of association and assembly.

Speaker

Jason Pielemeier


Reason

This comment is particularly insightful because it acknowledges the emotional toll of working in digital rights (‘digital depression’ – a play on Esteve’s ‘digital repression’) while providing crucial perspective. It challenges the prevailing pessimism by recontextualizing online spaces relative to offline alternatives, especially in repressive contexts.


Impact

This comment served as a pivotal moment that injected much-needed optimism into what had become a rather somber discussion about funding cuts, capacity issues, and rising authoritarianism. It reframed the conversation from one of defeat to one of continued purpose, reminding participants why their work matters and providing emotional grounding for the remainder of the discussion.


At the end of the day, the most impactful forms [of collaboration] are going to be those that truly shift power and resources back to civil society and allow them to engage… it’s not always structural support then to address them… it’s again this symbolic means of doing things.

Speaker

Ian Barber


Reason

This comment cuts through the diplomatic language often used in multi-stakeholder discussions to identify the core issue: the difference between symbolic inclusion and actual power-sharing. It challenges other panelists to move beyond tokenistic engagement to meaningful structural change.


Impact

This observation forced other panelists to be more specific about their collaboration efforts and accountability mechanisms. It elevated the discussion from general statements about ‘multi-stakeholder engagement’ to concrete questions about power dynamics, resource allocation, and genuine partnership, leading to more substantive responses about actual practices and challenges.


In order to achieve national security interests, in order to focus on ongoing innovation and have competition in the market, we have to ensure that human rights is integrated across those conversations and remains a priority… we have to do all of them at the same time.

Speaker

Alex Walden


Reason

This comment directly addresses one of the session’s central tensions by rejecting the false choice between human rights and other priorities. Instead of accepting trade-offs, it argues for integration – a more sophisticated approach that acknowledges complexity while maintaining principles.


Impact

This response helped shift the framing away from human rights as an obstacle to innovation/security toward human rights as an integral component of sustainable solutions. It influenced subsequent speakers to also reject the either/or framing and think more holistically about how different priorities can be mutually reinforcing rather than competing.


What are the accountability mechanisms for these type of partnerships, especially when you’re working in the global south and it’s very easy for global north actors to disengage when these type of partnerships are happening?

Speaker

Alejandro (Access Now)


Reason

This question from the audience cuts to the heart of power imbalances in international digital rights work. It challenges the panel’s discussion of partnerships by highlighting the structural inequalities that make such partnerships fragile and potentially exploitative.


Impact

This question forced all panelists to grapple with concrete accountability mechanisms rather than staying at the level of aspirational statements. It brought the discussion full circle to Ian Barber’s earlier points about power and resources, and prompted more specific responses about transparency, ongoing engagement, and structural supports for meaningful partnership.


Overall assessment

These key comments fundamentally shaped the discussion by introducing critical tensions and reframes that prevented the conversation from remaining at a superficial level. Esteve’s observation about the gap between commitments and reality established a sobering foundation that ran throughout the session. Jason’s ‘digital depression’ comment provided crucial emotional and strategic reframing that prevented despair from overwhelming the discussion. Ian’s focus on power dynamics challenged other participants to move beyond tokenistic approaches, while Alex’s integration argument offered a path forward that doesn’t sacrifice principles. Finally, Alejandro’s accountability question from the audience brought concrete urgency to abstract discussions of partnership. Together, these comments created a discussion that was both realistic about challenges and constructive about solutions, balancing acknowledgment of systemic problems with practical approaches for moving forward. The interplay between these perspectives created a more nuanced and actionable conversation than would have emerged from purely optimistic or pessimistic framings alone.


Follow-up questions

How to make cross-stakeholder engagement more global and better engage with small and medium enterprises

Speaker

Peggy Hicks


Explanation

This addresses the challenge of expanding beyond large companies to include smaller tech enterprises in human rights discussions and ensuring global representation rather than just North American/European perspectives


How to deal with investors within the tech space for human rights protection

Speaker

Peggy Hicks


Explanation

There’s a need to understand how to engage financial stakeholders who influence tech companies to prioritize human rights considerations in their investment decisions


How to assess the risks faced by human rights defenders through digital technology

Speaker

Peggy Hicks


Explanation

This was mentioned as part of a UN Human Rights Council resolution calling for specific work to understand and address threats to human rights defenders in digital spaces


How to bridge the gap between diplomatic achievements in human rights commitments and reality on the ground

Speaker

Esteve Sanz


Explanation

There’s a puzzling disconnect between global actors committing to respect fundamental freedoms online and the actual rise in digital repression that needs to be analyzed and addressed


How to ensure regulatory frameworks provide adequate safe harbors for effective content moderation

Speaker

Alex Walden


Explanation

Companies need clear legal protections to implement responsible content moderation practices while respecting human rights, but the complex regulatory environment makes this challenging


How to maintain human rights focus amid competing pressures from national security and economic competition narratives

Speaker

Peggy Hicks


Explanation

There’s a concerning trend where human rights considerations are being deprioritized in favor of security concerns and economic competitiveness, requiring strategies to maintain their importance


What accountability mechanisms can be created for partnerships working in the Global South to prevent disengagement by Global North actors

Speaker

Alejandro (Access Now)


Explanation

This addresses the need for structural safeguards to ensure sustained commitment and prevent abandonment of collaborative efforts in resource-constrained regions


How to better support civil society capacity building given funding challenges and proliferation of forums

Speaker

Ian Barber


Explanation

Civil society organizations face resource constraints while needing to engage across an increasing number of policy processes, requiring strategic approaches to capacity building and engagement


How to evaluate the effectiveness of new transparency and risk assessment tools like those under the Digital Services Act

Speaker

Alex Walden


Explanation

As new regulatory frameworks create public accountability mechanisms, there’s a need to assess whether these tools provide meaningful value for human rights protection


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment

Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment

Session at a glance

Summary

This discussion featured John Evans, Digital Services Commissioner at Coimisiún na Meán (Ireland’s media and online safety regulator), presenting how the organization contributes to media safety and aligns with the Global Digital Compact commitments. Evans explained that the regulator, established just over two years ago, has an unusually significant role in European digital regulation because many major tech platforms are headquartered in Ireland. The organization operates under six strategic areas: children’s protection, democracy, consumer trust, diversity and inclusion, culture, and public safety, all of which align closely with Global Digital Compact principles of human rights, internet governance, digital trust, and information integrity.


Evans detailed Ireland’s role as a Digital Services Coordinator under the EU’s Digital Services Act, explaining how this involves complex coordination at international, bilateral, and domestic levels. The regulator handles approximately 80% of complaints against online platforms due to Ireland’s status as their European base. He focused particularly on two strategic areas: democracy and children’s protection. Regarding democracy, Evans described extensive work during Ireland’s election year, including developing candidate protection packs and coordinating with other European regulators to address electoral integrity challenges. For children’s protection, he outlined both content-focused approaches through Ireland’s online safety code and systems-focused measures under the Digital Services Act.


The organization has grown rapidly from 40 to over 200 staff members, with plans to reach 300, demonstrating Ireland’s serious commitment to digital regulation. During the Q&A session, Evans addressed questions about resource allocation, policy implementation challenges, and coordination with other regulators, emphasizing the network-based approach of European digital regulation and Ireland’s responsibility to regulate not just for Irish citizens but for all Europeans.


Keypoints

**Major Discussion Points:**


– **Ireland’s unique regulatory role in Europe**: As home to many major tech companies (15 of 25 very large online platforms), Ireland’s media regulator Coimisiún na Meán has an outsized responsibility, handling approximately 80% of complaints against online platforms across Europe through the Digital Services Act framework.


– **Electoral integrity and democracy protection**: The regulator’s comprehensive approach to safeguarding elections, including developing toolkits with other European coordinators, creating candidate protection packs, and implementing measures to combat disinformation while supporting safe political participation online.


– **Child protection and online safety**: A two-pronged regulatory approach addressing both content (through Ireland’s online safety code prohibiting harmful content like self-harm promotion) and systems (through Digital Services Act provisions requiring platforms to protect minors’ safety, security, and privacy).


– **International coordination and network governance**: The complex web of relationships required for effective digital regulation, including cooperation with European Digital Services Coordinators, domestic agencies, NGOs, and international bodies like the Global Online Safety Regulators Network.


– **Resource allocation and enforcement challenges**: The regulator’s growth from 40 to over 200 staff (targeting 300), prioritization strategies based on risk assessment, and the balance between policy development speed and the urgency of addressing online harms.


**Overall Purpose:**


The discussion was a presentation by Ireland’s Digital Services Commissioner explaining how the country’s media regulator contributes to online safety and democratic values, both domestically and across Europe, followed by a Q&A session addressing practical regulatory challenges and enforcement approaches.


**Overall Tone:**


The tone was professional and informative throughout, with the presenter demonstrating confidence in Ireland’s regulatory approach while acknowledging significant challenges. During the Q&A, the tone became more conversational and collaborative, with the commissioner showing openness to dialogue and willingness to share experiences. There was a notably positive moment when an audience member complimented the regulator’s integrity, contrasting it favorably with Ireland’s data protection regulation, which seemed to energize the discussion around Ireland’s evolving regulatory reputation.


Speakers

– **John Evans**: Digital Services Commissioner at Coimisiún na Meán (Ireland’s media and online safety regulator)


– **Maria Farrell**: Irish citizen, digital and human rights activist


– **Audience**: Multiple audience members asking questions (roles/expertise not specified)


**Additional speakers:**


– **Niamh Hannafin**: Assistant Director for International Affairs at Coimisiún na Meán, Ireland’s media and online safety regulator


– **Paul**: Colleague of John Evans involved with new legislation on the democracy side (specific title not mentioned)


Full session report

# Comprehensive Discussion Report: Ireland’s Digital Services Regulation and the Global Digital Compact


## Overview and Context


This discussion featured John Evans, Digital Services Commissioner at Coimisiún na Meán (the Irish language name for Ireland’s media regulator), presenting how the organisation contributes to media safety and aligns with Global Digital Compact commitments. The session, introduced by Niamh Hannafin and including contributions from Maria Farrell (an Irish digital and human rights activist) and multiple audience members, provided an in-depth examination of Ireland’s unique role in European digital regulation and the practical challenges of implementing comprehensive platform oversight.


The discussion took place against the backdrop of Ireland’s distinctive position in the European digital landscape, where the country hosts 15 of the 25 very large online platforms regulated under the EU’s Digital Services Act. This geographical concentration of major tech companies has transformed Ireland’s media regulator into a body with responsibilities extending far beyond its national borders, effectively making it a key player in protecting digital rights across the entire European Union.


## Ireland’s Unique Regulatory Position and Organisational Structure


Evans began by explaining the extraordinary scope of Coimisiún na Meán’s responsibilities, emphasising that the organisation, just over two years old, handles approximately 80% of complaints against online platforms across Europe. This disproportionate responsibility stems from Ireland’s status as the European headquarters for major technology companies.
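The complaint-routing rule that produces this concentration (a complaint is filed with the complainant’s local Digital Services Coordinator and transmitted to the coordinator of the member state where the platform is established) is essentially a lookup on the country of establishment. The sketch below is purely illustrative: the registry, platform names and function are invented for this report and are not an actual DSA system.


```python
# All names here are invented for illustration; this is not an actual DSA
# registry or a Coimisiún na Meán system.

ESTABLISHMENT_REGISTRY = {
    "example-social": "IE",  # hypothetical platform established in Ireland
    "example-shop": "DE",    # hypothetical platform established in Germany
}

def route_complaint(platform: str) -> str:
    """Return the country code of the Digital Services Coordinator competent
    for a complaint: the DSC of the state where the platform is established."""
    try:
        return ESTABLISHMENT_REGISTRY[platform]
    except KeyError:
        raise ValueError(f"no EU establishment on record for {platform!r}")

# A complaint filed with any local DSC about "example-social" is routed to
# the Irish DSC; with most very large platforms established in Ireland, this
# is why roughly 80% of complaints end up with the Irish regulator.
assert route_complaint("example-social") == "IE"
```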


The regulator operates under six strategic areas that align closely with Global Digital Compact principles: children, democracy, consumer protection from exploitation and scams, diversity and inclusion, culture, and public safety. These areas correspond directly to the Compact’s focus on human rights, internet governance, digital trust, and information integrity, demonstrating how national regulatory frameworks can support international digital governance objectives.


The organisation’s rapid expansion reflects Ireland’s commitment to its new regulatory mandate. Evans detailed how the regulator has grown from 40 to just over 200 staff members, with plans to reach 300 employees within another six to nine months. Just over half of current staff support online safety work. This dramatic scaling represents a significant investment in regulatory capacity and signals a transformation from the organisation’s previous incarnation as the Broadcasting Authority of Ireland.


## Electoral Integrity and Democracy Protection


One of the most detailed aspects of Evans’s presentation focused on the regulator’s work protecting democratic processes, particularly during Ireland’s election year. The approach demonstrates the complexity of safeguarding electoral integrity in the digital age, requiring coordination across multiple levels and stakeholders.


The regulator developed comprehensive election guidelines and toolkits in collaboration with other European Digital Services Coordinators. These tools address electoral integrity challenges by requiring platforms to implement specific measures, including elevating official sources of information and limiting the spread of disinformation during critical electoral periods.


A particularly innovative initiative was the creation of candidate protection packs, developed in cooperation with Irish police. These resources help politicians understand how to respond when targeted online during elections, providing practical guidance for maintaining safe participation in public life. Evans noted that the regulator is currently conducting research to evaluate the effectiveness of these packs, indicating a commitment to evidence-based policy development.


The democratic protection work extends beyond individual elections to encompass broader concerns about political participation and media freedom. However, this area also highlighted some implementation challenges, particularly regarding new media privilege rules for journalistic content in platform content moderation systems, where platforms remain uncertain about execution requirements.


## Child Protection and Online Safety Framework


Evans outlined a sophisticated two-dimensional approach to protecting children online, addressing both content-specific harms and systemic platform design issues. This comprehensive framework demonstrates how modern digital regulation must operate across multiple regulatory instruments to achieve effective protection.


The content dimension operates through Ireland’s online safety code, which prohibits platforms from hosting harmful material such as content promoting self-harm or eating disorders. This approach focuses on removing specific types of dangerous content that could directly harm young users.


The systems dimension, implemented through the Digital Services Act, requires platforms to protect minors’ safety, security, and privacy through structural changes to their operations. This includes prohibiting addictive design features targeted at children and implementing robust age verification systems. Evans noted that guidance from the European Commission on Article 28 of the Digital Services Act, which specifically addresses minor protection, will emerge later in the year.


The regulator is pursuing coordinated enforcement actions regarding adult sites, following the European Commission’s investigations into four adult sites. Digital Services Coordinators are examining coordinated action for platforms below the 45 million user threshold, with Coimisiún na Meán serving as vice chair of the working group. Additionally, educational initiatives are being expanded, with the “rights rules and reporting online educational resource” distributed to primary schools and plans for cinema-based awareness campaigns targeting parents during summer months, with Evans hoping for “a rainy summer in Ireland as usual” to maximize cinema attendance.


## International Coordination and Network Governance


A significant portion of the discussion addressed the complex web of relationships required for effective digital regulation in an interconnected world. Evans emphasised that the Digital Services Act creates a network-based approach to regulation, moving beyond failed self-regulatory models to establish meaningful coordination between member states and the European Commission.


This network approach involves multiple levels of coordination: international cooperation through bodies like the Global Online Safety Regulators Network, bilateral relationships with other European regulators, and domestic coordination with various agencies and civil society organisations. The complexity of these relationships reflects the inherently cross-border nature of digital platforms and the harms they can facilitate.


Evans highlighted the value of learning from other regulatory approaches, specifically mentioning Australia’s eSafety Commission and the UK’s Ofcom as examples of different models being tested globally. This international perspective suggests that effective digital regulation will emerge through experimentation and knowledge sharing rather than a single prescribed approach.


However, the network model also raises concerns about potential vulnerabilities. An audience member questioned how Ireland would handle coordination with member states that might have weak digital service coordinators or experience rule of law backsliding. Evans responded that the European Commission and Digital Services Board provide protective mechanisms through shared accountability and mutual support, though this remains an area requiring ongoing attention.


## Resource Allocation and Enforcement Challenges


The discussion revealed significant tensions around resource adequacy and prioritisation in digital regulation. An audience member’s direct question about resource allocation prompted Evans to provide detailed insights into how the regulator manages its enormous mandate with finite resources.


The regulator employs a risk-based prioritisation approach, considering factors such as platform reach, user demographics, and past enforcement actions across Europe. This systematic approach attempts to focus regulatory attention on areas where intervention can have the greatest impact on user safety and rights protection.
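Evans did not present a formal scoring model, but the factors he lists lend themselves to a simple illustration. The following sketch is hypothetical: the factor names, weights and normalisations are invented to show how reach, young-user share and past enforcement signals might be combined into the kind of risk ranking he describes.


```python
from dataclasses import dataclass

@dataclass
class PlatformProfile:
    # Fields and weights below are hypothetical illustrations,
    # not Coimisiún na Meán's actual methodology.
    name: str
    monthly_users: int        # reach within the EU
    share_young_users: float  # 0.0-1.0, proxy for child-protection exposure
    takedown_orders: int      # orders issued by competent authorities

def risk_score(p: PlatformProfile) -> float:
    """Combine the factors mentioned above into one illustrative score."""
    reach = min(p.monthly_users / 45_000_000, 1.0)   # normalise against the VLOP threshold
    minors = p.share_young_users                     # more young users, higher priority
    enforcement = min(p.takedown_orders / 100, 1.0)  # past orders as a trust-and-safety signal
    return 0.4 * reach + 0.4 * minors + 0.2 * enforcement

platforms = [
    PlatformProfile("video-app", 30_000_000, 0.55, 40),
    PlatformProfile("forum-site", 2_000_000, 0.10, 3),
]
# Highest-risk platforms first: the ordering, not the absolute score,
# is what drives where regulatory attention goes.
for p in sorted(platforms, key=risk_score, reverse=True):
    print(f"{p.name}: {risk_score(p):.2f}")
```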


However, the scale of the challenge remains substantial. Evans acknowledged the tension between the need for rapid action due to the severity of emerging harms and the time typically required for regulatory frameworks to prove their effectiveness. The resource challenge is compounded by Ireland’s responsibility to regulate not just for Irish citizens but for all Europeans using platforms headquartered in Ireland.


## Implementation Challenges and Practical Concerns


One area of discussion emerged around the gap between policy aspirations and regulatory reality. An audience member from the OECD raised concerns about policy discussions that propose interventions without adequate consideration of enforceability and implementation challenges.


This highlighted tensions in digital governance between the pressure to develop responses to digital harms versus the practical constraints facing regulators who must actually implement and enforce policies. The audience member argued that regulatory expertise is often inadequately integrated into policy development processes.


Evans acknowledged this challenge while defending the approach of working within existing frameworks and learning from implementation experience. He provided context about how internet regulation evolved from a “hands-off” approach to current targeted legislation like the Digital Services Act and Digital Markets Act, suggesting that regulatory frameworks must develop iteratively.


## Ireland’s Regulatory Reputation and Independence


A significant moment in the discussion came when Maria Farrell directly addressed Ireland’s regulatory approach. She stated that among European digital and human rights activists the regulator has gained a reputation for “acting with strength and integrity as a regulator”, contrasting this with criticism of other Irish regulatory approaches, notably data protection.


Evans responded by emphasising the importance of having a clear strategic direction and mission that serves as a “North Star” for the organisation, suggesting that consistent principles help maintain regulatory independence despite changing contexts. He also noted the importance of political support for the regulator’s mandate.


This exchange highlighted the critical importance of regulatory credibility and independence in digital governance, particularly in jurisdictions where economic interests might otherwise influence regulatory effectiveness.


## Key Areas of Discussion and Consensus


The discussion revealed substantial agreement on several key points. All participants acknowledged the enormous scope and complexity of platform regulation, recognising that limited resources require careful prioritisation and strategic thinking about regulatory intervention.


There was strong agreement on the importance of cross-border regulatory coordination, even while acknowledging the challenges this creates when some member states may have weaker regulatory capacity. The network-based approach of the Digital Services Act was generally viewed as a positive development over previous self-regulatory models.


The discussion also revealed agreement on the urgency of regulatory action despite implementation challenges. While participants acknowledged significant constraints in current approaches, there was consensus that waiting for perfect solutions is not viable given the severity of emerging digital harms.


## Ongoing Challenges and Future Considerations


Several significant issues remain challenging, highlighting areas requiring continued attention. The challenge of coordinating with member states that may have weak digital service coordinators represents a potential concern in the European regulatory network.


The implementation of new media privilege rules for journalistic content in platform content moderation remains unclear, with platforms reportedly uncertain about execution requirements. This reflects broader challenges in translating policy objectives into practical platform operations.


Questions about long-term regulatory sustainability and maintaining effectiveness across changing political contexts represent ongoing considerations for digital regulators.


## Conclusion


This discussion provided valuable insights into the practical realities of digital regulation in contemporary Europe. Ireland’s experience as a Digital Services Coordinator demonstrates both the possibilities and constraints facing regulators tasked with protecting digital rights and democratic values.


The conversation revealed that effective digital regulation requires appropriate legal frameworks, adequate resources, political support, international cooperation, and ongoing adaptation to evolving challenges. The emphasis on evidence-based policy development, international cooperation, and maintaining regulatory independence provides a foundation for continued progress in this critical area of governance.


Evans’s presentation and the subsequent discussion highlighted both the significant challenges and the practical approaches being developed to address digital harms while protecting fundamental rights and democratic processes in an interconnected digital environment.


Session transcript

Niamh Hannafin: Hello, hi there. Good afternoon. Thank you very much for attending and thank you to IGF. My name is Niamh Hannafin. I’m Assistant Director for International Affairs at Coimisiún na Meán, Ireland’s newly established media and online safety regulator. I’m very pleased to introduce to you our commissioner, our Digital Services Commissioner, John Evans, who’s going to talk you through the ways in which we are contributing to a healthy media and online landscape in Ireland, but also towards some of the key commitments of the Global Digital Compact. Over to you, John.


John Evans: Thanks, Niamh. Okay. So I can see the slides. So I guess first of all, Coimisiún na Meán, it’s the Irish language name for the media regulator. So we’re a new regulator. We’re just over two years old. We have a pretty broad mandate covering online safety through media development, with a particular emphasis on Irish culture as well, which is an important part of our identity as an organisation. We have an unusual or sort of an outsized role in the European setting because so many of the large, the very large online platforms, so many of the big tech companies, are established in Ireland. Let me say a quick word about the companies we regulate. I mentioned media development, I mentioned online safety, and I mentioned kind of broadcasting regulation. And you can see just quite a variety of recognisable brands in there. It means quite a span of work for the organisation. You’ll see that our mission here particularly recognises the role that the media plays in underpinning fundamental rights and in fostering an open, democratic and pluralistic society. Coimisiún na Meán’s vision of a thriving, diverse, creative, safe and trusted media landscape and our strategic direction very closely align with the Global Digital Compact. The Global Digital Compact emphasises human rights, internet governance, digital trust and information integrity. And as you’ll see, as I’m talking through our strategy in a moment and then a few examples, these principles kind of shine through very clearly. So this is like a busy sort of a slide, but what I’ll talk you through very quickly is the six strategic areas or areas of emphasis. Children: we want a media landscape that upholds the rights, wellbeing and development of children and their safe engagement with content. Democracy: a media landscape that supports democracy and democratic values, underpins civic discourse and reduces the impact of disinformation. We also want a media landscape that consumers can trust, where they are protected from exploitation and scams. Diversity and inclusion: a media landscape that promotes the values of justice, equality and diversity. And then culture: a media landscape that is sustainable, pluralistic and participative and that reflects who we are as a society. And again, as an Irish person, our culture is emphasised very importantly in our mission. The last one is public safety. This is kind of a broad one and it captures everything from terrorist content online through to, for example, a response to emergency situations. A word on our regulatory approach. Empowering people and ensuring that they have the tools to understand media, that they have information to make decisions, to make good decisions, that’s part of our toolbox. Supporting and developing the Irish media landscape: we see that as very symbiotic between, on the one hand, the cultural aspect, and on the other, navigating the online world.
So, for example, journalism schemes that we support are a very important part of our toolkit. We are a research and future-focused organisation, so it’s important to understand from a market perspective and a technology perspective what the future is going to look like, how we can expect things to change, and how the regulatory response should adapt as we move on. At the core, however, is holding regulated entities to account. So our role is really moving beyond a self-regulatory model which, in many respects, hasn’t worked. I want to say a bit more about the internet governance ecosystem. So within our delivery tools, if you like, we have included, on the one hand, collaborating for impact and then also influencing the European framework. So what I’m going to talk a little bit about is sort of the C, the coordinator, in the Digital Services Coordinator role. So under the Digital Services Act, Coimisiún na Meán is the Digital Services Coordinator for Ireland. And that coordinating role is quite complicated. So if you think at the international level, the fulcrum of what we do is really around participation in the European network along with other Digital Services Coordinators and the European Commission, but also in other media networks supporting regulation. Bilateral relationships with other Digital Services Coordinators across Europe, frequently other media regulators, but other kinds of regulators as well, are very important. The reason for this is that on an operational level, when someone wants to complain about a platform that is established in Ireland, they need to make the complaint to their local Digital Services Coordinator and that’s transmitted to the Irish regulator. So that means that the Irish regulator is responsible for dealing with upwards of 80% of the complaints against online platforms. So that’s a very operational role that we have. And then in the experience-sharing space, we try to participate in other kinds of fora that go beyond Europe. So for example, the OECD, some of the UN organisations, but also the Global Online Safety Regulators Network, of which we’re a founding member. Then domestically, it’s also reasonably complicated. So we’re not the only competent authority under the Digital Services Act in Ireland. Our Competition and Consumer Protection Commission is also one, so relationships with that organisation are very important. Our police service is An Garda Síochána, and we do need to cooperate with them as well. They have a role under the Digital Services Act and, more broadly, as a complementary agency in the online safety space. Other digital regulators: for example, we have close relationships with our telecommunications regulator and, importantly, our Data Protection Commission, with whom we’re also drafting a cooperation agreement. But then there’s a wider set again of agencies that are involved in the different areas of harm that I mentioned, or that are addressed by our strategic objectives. The Electoral Commission in the electoral integrity space is one example. But there are many departments and agencies that fall into this category, from our Department of Health through to our Electoral Commission, and many NGOs. So non-governmental organisations are also very important in this area. So the coordinating role is really quite a demanding role in terms of internet governance bodies in this space.
Now, of the six strategic objectives that I outlined earlier, I want to go into a bit more detail about two of them: the democracy one and the children one, okay? And the reason I want to spend a bit more time on these now is because, as I said right at the beginning, we’re a new agency and we’ve been developing our capacity to address these different strategic objectives. Two very important ones right from the outset were democracy and minor protection. So on the democracy side, last year, it’s often said now, was the year of elections, and in Ireland it was no different. Last year we had European parliamentary elections, we had a referendum, we had local elections and we also had a general election, and then later this year we will have a presidential election, so there’s no let-up. But right across Europe there were European elections, obviously, and many general elections as well, in which we had some role. So we engaged intensively in our own elections, but we played a supportive role within the network of Digital Services Coordinators across Europe. Digital regulation works best when there is coordination across countries. Elections have become a lightning rod, if you like, for newly created governance structures, for example in the DSA, and the Digital Services Coordinators last year and this year worked closely with the European Commission to share best practice, exchange election experiences and collectively solve problems and develop tools. And it’s a really good example of where the Digital Services Act and the network of agencies involved, working together in a horizontal way to address problems, can be quite effective. So while an election is not an emergency, it does require a degree of agility on the part of regulators; they need to be responsive to changing circumstances. So within the EU, the Digital Services Coordinators have developed a toolkit, if you like, to help address some of the challenges that arise in the context of electoral integrity. First, early last year, one of the first things that the Digital Services Board, the newly established board under the Digital Services Act, approved was the election guidelines. The guidelines include recommendations aimed at platforms for measures which mitigate the risks to electoral integrity, such as elevating official sources of information around electoral processes, demonetising and limiting the spread of disinformation, labelling political advertising and, importantly, an onus on platforms to build internal teams that are capable of addressing national and local elections. As 15 of the 25 very large online platforms are based in Ireland, we were privileged to participate in many of the pre-election preparations in different member states. That involved attending workshops and scenario planning, and then roundtables involving whichever local agencies and bodies were in the local electoral ecosystem. We did one of these of our own, and at that we would have had our Electoral Commission, we would have had the platforms, we would have had some fact-checking agencies, and we also had, very importantly, An Garda Síochána, our police force. There are a couple of extra points I just want to make in relation to electoral integrity. One, and this is very important I feel, is supporting the safe participation of politicians in public life.
We undertook a specific initiative with our police force, An Garda Síochána, last year to develop a candidate pack. The candidate pack was developed first for the European elections and local elections and then further enhanced for our general election. It was aimed at candidates participating in these elections, so they would know what to do and have to hand, quickly, information about how to respond if they were targeted, for whatever reason, online. We feel that made a difference, but we’re conducting research at the moment to find out exactly how it helped, and I think this is an area where we look to develop. The second area that I want to focus on is child rights, or minor protection. The protection of children’s rights online has become an issue of concern worldwide, and it’s critical that action is taken. Different regulatory approaches are being tested in different parts of the world; social media bans, for example, are being proposed in several countries. In Ireland and within Europe we see this problem as having two dimensions: the first is a content dimension and the second is systems. The legislative instruments that we have are the Digital Services Act, which we feel addresses principally the systems aspect, and then our online safety code, which comes from the Audiovisual Media Services Directive as part of the transposition of that directive into national law. Our online safety code clearly defines and lays out the kind of content that children need to be protected from. So regulated platforms must preclude the uploading and sharing of content that promotes self-harm or suicide, eating and feeding disorders, and cyberbullying. It also requires the use of age assurance to ensure that children are not normally exposed to videos that contain pornography or depictions of gross or gratuitous violence. There are also provisions relating to parental controls. On the systems side, the Digital Services Act, on the other hand, is a content-neutral instrument, and instead it has provisions that mean that platforms need to take appropriate measures to protect the safety, security and privacy of minors; that’s the wording of Article 28 of the Digital Services Act. How platforms are supposed to go about implementing that article of the DSA will be informed by guidance that the European Commission has recently consulted on and which will emerge later this year. The draft guidelines, you’ll have seen, are quite extensive, and they cover issues related to prohibiting addictive design features, age verification to prevent minors viewing age-inappropriate content, having child accounts or accounts aimed at teenagers set to the highest level of privacy, and recommender systems that do not result in the repeated exposure of content that could pose risks to their safety or security. It’s important to say as well that there are also key enforcement activities already underway. Just a couple of weeks ago the European Commission announced the opening of investigations into four adult sites. These are very large online platforms, so they fall within the purview of the European Commission. But to complement that action, the Digital Services Coordinators have responsibility for below-threshold platforms, including adult platforms, of which there are many, that have numbers of users below the 45 million threshold that defines a very large online platform.
So to complement that action that the European Commission is taking, the Digital Services Coordinators across the Member States are looking into a coordinated action to address the similar problems arising on the below-threshold adult sites. Coimisiún na Meán is quite active on that; we are the vice chair of the working group of the Digital Services Board that is looking to help develop a coordinated action. But it’s not all just about enforcement. Aside from our regulatory powers, Coimisiún na Meán also supports the rights of children through other initiatives such as raising awareness and media literacy efforts. Last year, for example, we published the rights, rules and reporting online educational resource, and that has been distributed to primary schools throughout the country, and later this year we’re looking to extend that to primary age children as well. Alongside that we will be doing fairly extensive awareness-raising campaigns to support, on the one hand, the schools, but also parents, and during the summer we’re actually going to run a media campaign in cinemas in the hope that we’ll have a rainy summer in Ireland as usual, and parents will take their children to see movies and we’ll get to see that particular advertisement. Just a couple of comments to round up on the challenges that we face as digital regulators. Whether it’s promoting children’s welfare or safeguarding democracy, they’re not isolated issues; they’re interconnected challenges that require a coordinated and innovative response that puts fundamental rights at the centre. Ireland’s unique position as home to some of the major tech companies means that we have quite a heavy responsibility but also an opportunity. We’re not just regulating for Ireland; in many respects, we’re also regulating for European citizens. The Global Digital Compact and its vision of an inclusive, open, safe and secure digital space is not just an aspiration; it’s a practical framework, and it’s reflected very clearly in our organisation’s strategic statement. As we look forward, Coimisiún na Meán remains committed to not just regulating the digital future but actively shaping it in service of an open, democratic and pluralistic society. The work we do today will determine whether technology serves humanity’s aspirations or undermines them, so I think we are at a critical moment in Europe in particular, and we will see whether or not the regulatory measures and systems and frameworks that we put in place, and which are now developing, will have impact. It’s an interesting time. Thank you. Any questions now, if anybody’s interested?


Audience: Thank you very much. It sounds like you have a lot of obligations and a huge task, and, not to put more pressure on you, but we kind of count on you to take on the battle against the platforms for the rest of us in the EU. How do you make your choices? Do you have enough resources? What is your policy on prioritising with the resources that you have, given all the challenges that there are?


John Evans: Yeah, sure. When we started we had just 40 people in the organisation; that was the Broadcasting Authority of Ireland, a kind of legacy organisation. They did traditional broadcasting regulation, so to that remit was added the online safety brief, which is huge, right? And we’re now at just over 200 people, and we think within another six to nine months we’ll be at 300 people.
About a hundred, so just over half I think, of those are in one way or another supporting the online safety side of the work of the organisation. So Ireland has taken the responsibility quite seriously and really put the resources into that, and any time we ask, politically, we always get the support that we’re looking for. It is an important mandate and society has rallied around it. We could even see it in the kinds of people that have come to work with us. We had a concern maybe that, because we’re a public service organisation, we might not be able to attract great people, but people have been really interested in the mandate. The second thing I’d say is that we’re not alone as a regulator here. I do want to emphasise the network nature of the regulatory approach in Europe. So, you know, my French colleagues often describe the Digital Services Act network approach as a sort of a team, with the European Commission being the captain. Ireland has a very important role to play, not least because so many of the platforms are based here, and we do have the support of the Commission and also of other Digital Services Coordinators. On prioritisation, we have said publicly, and we are developing the mechanisms in the background, that we want to focus our regulatory efforts more precisely. So at the very highest level, if you just set aside the Irish culture one for a second, the other five strategic objectives you can kind of invert and think of as areas of harm. So online hate, the undermining of electoral processes and so on. Those are five high-level areas that we try to focus on. But on top of that we’ve tried to layer, or we’re trying to layer, what we call this risk-based approach. So what kind of reach does a platform have? Does it, for example, have a lot of young users? If it does, then it’s going to move up the rankings in terms of potential risk under the child protection strategic objective. And there are many of those kinds of characteristics that you can observe, from service characteristics to how many takedown orders, for example, have been issued by competent authorities across Europe against the particular platform. That tells you how a particular platform is setting up its trust and safety business.


Audience: Thanks so much for this interesting overview. I have two questions that are somehow interrelated. The first one: I work at the OECD in a policy space where, if we go back to this power concentration and big tech and platform perspective, there are very often calls in policy discussions for interventions that are maybe not necessarily enforceable, right? So very often in these discussions we have people raising concerns that this is not implementable or enforceable from the legal side or from the regulatory side. So the first part of my question would be: do you feel that your expertise and specific background knowledge of how complex these issues are is also taken up on the other side of the spectrum, in policy and regulatory development, whether there’s this interlinkage? And then the second part of the question, which is maybe too specific, and please feel free to ignore it if it’s too specific, but I’d be interested to hear your thoughts on it, because you also mentioned the elevation of authoritative content in the election context. Now there is also the specific rule on media privileges, this privilege for journalistic content in platform content moderation, which is also very contested and discussed, and it will now be applicable, I think, as of August. At least the platforms that we spoke with don’t really know how to do it yet, so I wonder if you have already prepared for that, if you already have some approach to it from a regulatory body and enforcement side.


John Evans: Thanks. Okay, yeah, the second one is quite specific, but I’d be happy to talk to you; we’re getting ready as well. And I have a colleague here, Paul, who’s involved with some of the new legislation coming down the track on the democracy side, so I’ll be happy to chat. But on the skills question: do we, or does the policy side, have the requisite skills to carry out the mandate, with new recommendations and new approaches being proposed all the time? Sometimes it is hard to keep up. If you cast your mind back 15 or 20 years, the approach to the regulation of the internet was: let’s keep our hands off it for the moment, let’s see how it develops. Gradually, problems started to emerge. Very early, it was perceived to be around concentration issues, so competition policy was seen as maybe an appropriate measure, and consumer protection measures to a degree, but it became apparent over a number of years that there were certain characteristics of the platform economy that were unique and were driving different dynamics that the regulatory systems were not capable of handling effectively. So enter the Digital Services Act and the Digital Markets Act in the late teens. The Digital Services Act and the Digital Markets Act, they’re kind of twins if you like, were trying to address the harms that were becoming apparent. Both of those pieces of legislation were pushed through in quite a speedy way. European legislation takes time to develop and emerge, and I think they stand out as having been done quite quickly. But also I think it’s recognised that they are a first step in developing a comprehensive, efficient, streamlined regulatory process. So you always have this tension between trying to work within the system that you have and people at the same time recommending, actually, that’s not going to work well, you need to try this. I think we need to try what we have first, see what works, learn from it and develop new things as we mature. But I think the problem is that the harms are perceived as really quite severe and we just don’t have time to wait and see how things mature. If you look at a different regulatory system, say telecommunications for example, there the framework was developing and evolving over a period of 25 years as competition was embedded in European telecommunications markets. I don’t think we have that same privilege of waiting to see how things develop; we need to move quite quickly in the online safety space, I think. I don’t know if that answers the question, but I’m happy to chat, yeah.


Maria Farrell: Hi, I’m Maria Farrell, I’m a fellow Irish citizen, and I have a compliment for you and a question. The compliment is that amongst digital and human rights activists around Europe, Coimisiún na Meán already has a reputation for acting with strength and integrity as a regulator, which has been completely lacking in our data protection regulator and in how Ireland deals with tax and the tech cos. So you guys are completely changing the narrative on what we can do as a country to actually stand up to our responsibility to regulate these firms that are headquartered in Ireland. My question to you is: what are you doing, and what can you do, to ensure that you continue to act with that strength, with that integrity, with that moral courage that says, you know, we are going to stand up to these firms and stand in defence, in active defence, of European democracy?


John Evans: I always give a two-part answer to this. The first is that we’re really quite proud of the strategy document that we put together; we think it’s a pretty solid North Star for us. We’re supposed to refresh and renew these things every two or three years or so, but we don’t expect our mission and the strategic objectives to change dramatically over the next while. Those are going to be consistent North Stars, if you like. So we think we have the strategic direction quite well set and embedded in the organisation, and we think that will endure. The other side of it is that, resource-wise, we’re not pushing at closed doors, in the sense that there is an expectation that we will act, and we’re happy to do that, you know, we’re happy to do that. But independent regulation is independent regulation. Political contexts change, but until somebody changes the law, we’re going to enforce the law to the best of our ability.


Audience: I’ll try as best as I can to make it short. I was wondering, the DSA being sort of content agnostic, how do you see your role as a regulator in this network of other regulators, especially in relation to member states where there may be rule of law backsliding? So how would you relate, as the Irish media commission, to member states with perhaps a less strong Digital Services Coordinator, or ones making decisions that you do not agree with from a rule of law perspective?


John Evans: I think part of the protection against that is the role that the European Commission has to play, but also the Digital Services Board. So we get to hold each other to account, but also support each other, within that network. And really, I think, the key pieces: there are very important articles under which the Digital Services Coordinators have a shared responsibility with the European Commission, but the pieces around the systemic risks, Article 34 and Article 35, those are really the core strategic pieces, the central planks of the Digital Services Act, and I think those are the best protections. We’re happy to chat, yeah. Can I do another one, yeah? GOSRN? Yeah, yes, we’re part of GOSRN. I described participation in the Digital Services Board as strategic, and there’s a kind of operational aspect to that. The Global Online Safety Regulators Network is really excellent in trying to understand different regulatory approaches in different countries and what the best practices are, because Australia’s eSafety Commission is often further along the regulatory path than we are in certain respects, and there’s a lot for us to learn from that. Ofcom is a member of that as well, and they’re a well-established, very expert regulator from whom we have a lot to learn, but we’re also happy to share the European experience in that network. Okay, I’d better go, sorry, but I’m happy to chat. Thanks.


J

John Evans

Speech speed

143 words per minute

Speech length

4218 words

Speech time

1765 seconds

Ireland’s Role as Digital Services Coordinator and Regulatory Framework

Explanation

Ireland has a disproportionately large responsibility in European digital regulation because many major tech platforms are headquartered there. This means the Irish regulator must handle approximately 80% of all complaints against online platforms across Europe, requiring significant coordination with other regulators and the European Commission.


Evidence

15 of the 25 very large online platforms are based in Ireland; complaints from other EU countries are transmitted to the Irish regulator; Ireland expanded from 40 to over 200 staff with plans to reach 300; the Digital Services Act creates a network approach with Ireland as Digital Services Coordinator


Major discussion point

Ireland’s unique position and responsibility in European digital regulation


Topics

Legal and regulatory | Jurisdiction | Data governance


Electoral Integrity and Democracy Protection

Explanation

Digital Services Coordinators across Europe developed comprehensive guidelines and toolkits to protect electoral integrity during the ‘year of elections.’ These measures require platforms to take specific actions like elevating official information sources, limiting disinformation spread, and building internal teams capable of addressing national elections.


Evidence

Ireland had European parliamentary elections, referendum, local elections, and general election; Digital Services Board approved election guidelines; platforms must elevate official sources, demonetize disinformation, label political advertising; pre-election workshops and scenario planning conducted with Electoral Commission, platforms, fact-checkers, and police


Major discussion point

Coordinated European approach to protecting democratic processes online


Topics

Sociocultural | Content policy | Human rights | Freedom of expression


Agreed with

– Audience

Agreed on

Gap between policy development and regulatory implementation


Child Protection and Online Safety

Explanation

Ireland employs a dual approach to child protection online, addressing both harmful content through specific safety codes and systemic issues through the Digital Services Act. This comprehensive strategy includes both regulatory enforcement and educational initiatives to protect minors from various online harms.


Evidence

Online safety code prohibits content promoting self-harm, suicide, eating disorders, and cyberbullying; requires age assurance for pornographic content; Digital Services Act Article 28 requires platforms to protect safety, security and privacy of minors; coordinated enforcement actions against adult sites; educational resources distributed to primary schools; cinema advertising campaigns planned


Major discussion point

Comprehensive approach to protecting children online through regulation and education


Topics

Human rights | Children rights | Cybersecurity | Child safety online


Regulatory Challenges and Resource Allocation

Explanation

The regulator uses a risk-based approach to prioritize enforcement efforts, considering factors like platform reach, user demographics, and past enforcement history. Unlike traditional regulatory sectors that had decades to develop, online safety regulation must move quickly due to the severity of emerging harms.


Evidence

Five high-level areas of harm identified; risk assessment considers platform characteristics like number of young users and takedown orders issued by authorities; comparison to telecommunications regulation which developed over 25 years; over half of 200+ staff support online safety work


Major discussion point

Balancing limited resources against urgent need for effective platform regulation


Topics

Legal and regulatory | Data governance | Consumer protection


Agreed with

– Audience

Agreed on

Gap between policy development and regulatory implementation


Ireland’s Regulatory Reputation and Independence

Explanation

The regulator maintains independence through consistent strategic direction and strong organizational mission that serves as a North Star regardless of political changes. The commitment is to enforce existing laws to the best of their ability until laws are changed through proper channels.


Evidence

Strategic document serves as North Star; mission and strategic objectives expected to remain consistent; political support consistently provided when requested; independent regulation means enforcing law regardless of political context


Major discussion point

Maintaining regulatory independence and integrity in politically sensitive tech regulation


Topics

Legal and regulatory | Human rights principles | Jurisdiction


Cross-Border Regulatory Coordination

Explanation

Protection against weak regulation in some member states comes through the shared responsibility structure of the Digital Services Act, where the European Commission and Digital Services Board provide mutual accountability. International networks facilitate sharing of regulatory best practices across different jurisdictions.


Evidence

European Commission serves as ‘team captain’ in network approach; Digital Services Board enables regulators to hold each other accountable; Articles 34 and 35 are core strategic pieces of DSA; Global Online Safety Regulators Network shares best practices; Australia’s eSafety Commission and UK’s Ofcom provide regulatory expertise


Major discussion point

Ensuring consistent regulatory standards across jurisdictions with varying capabilities


Topics

Legal and regulatory | Jurisdiction | Human rights principles


Agreed with

– Audience

Agreed on

Importance of cross-border regulatory coordination


A

Audience

Speech speed

164 words per minute

Speech length

382 words

Speech time

139 seconds

Electoral Integrity and Democracy Protection

Explanation

There are concerns about the gap between policy recommendations and practical enforceability in digital regulation. Many policy discussions propose interventions that may not be legally or technically implementable, raising questions about whether regulatory expertise is adequately considered in policy development.


Evidence

OECD policy discussions often feature calls for interventions that are not necessarily enforceable; concerns raised about implementability from legal and regulatory perspectives; specific mention of media privileges rule for journalistic content that platforms don’t know how to implement


Major discussion point

Gap between policy aspirations and regulatory implementation capabilities


Topics

Legal and regulatory | Jurisdiction | Human rights | Freedom of the press


Agreed with

– John Evans

Agreed on

Gap between policy development and regulatory implementation


Regulatory Challenges and Resource Allocation

Explanation

Questions arise about whether regulators have sufficient resources and capacity to handle the enormous scope of platform regulation, especially given Ireland’s responsibility for regulating on behalf of all EU citizens. There’s concern about the regulator’s ability to make appropriate prioritization choices given limited resources.


Evidence

Recognition of Ireland’s huge task and obligations; acknowledgment that ‘we kind of count on you to take on the battle against the platforms for the rest of us in the EU’; questions about resource adequacy and prioritization policies


Major discussion point

Adequacy of regulatory resources for the scale of platform oversight needed


Topics

Legal and regulatory | Data governance | Consumer protection


Agreed with

– John Evans

Agreed on

Gap between policy development and regulatory implementation


Cross-Border Regulatory Coordination

Explanation

Concerns exist about how to maintain effective coordination when some member states may have weak digital service coordinators or experience rule of law backsliding. This raises questions about the integrity of the network approach when some nodes in the network may be compromised.


Evidence

Specific concern about member states with rule of law backsliding; questions about relating to member states with weak digital service coordinators; concerns about disagreeing with decisions from a rule of law perspective


Major discussion point

Maintaining regulatory network integrity when some member states have governance challenges


Topics

Legal and regulatory | Jurisdiction | Human rights principles


Agreed with

– John Evans

Agreed on

Importance of cross-border regulatory coordination


M

Maria Farrell

Speech speed

176 words per minute

Speech length

155 words

Speech time

52 seconds

Ireland’s Regulatory Reputation and Independence

Explanation

Ireland’s media regulator has earned recognition among European digital and human rights activists for demonstrating strength and integrity in platform regulation. This represents a significant departure from Ireland’s previous reputation regarding tech company regulation, particularly in data protection and taxation areas.


Evidence

Reputation among digital and human rights activists across Europe for acting with strength and integrity; contrast with criticism of Ireland’s data protection regulator and tax treatment of tech companies; recognition that the regulator is ‘changing the narrative on what we can do as a country’


Major discussion point

Ireland’s transformation from regulatory haven to responsible platform oversight


Topics

Legal and regulatory | Human rights principles | Privacy and data protection


Agreements

Agreement points

Resource adequacy and prioritization challenges in digital regulation

Speakers

– John Evans
– Audience

Arguments

Regulatory Challenges and Resource Allocation


Regulatory Challenges and Resource Allocation


Summary

Both acknowledge the enormous scope and complexity of platform regulation, with limited resources requiring careful prioritization. There’s recognition that Ireland faces a disproportionate responsibility for EU-wide platform oversight.


Topics

Legal and regulatory | Data governance | Consumer protection


Importance of cross-border regulatory coordination

Speakers

– John Evans
– Audience

Arguments

Cross-Border Regulatory Coordination


Cross-Border Regulatory Coordination


Summary

Both recognize the critical need for effective coordination between regulators across jurisdictions, though they acknowledge challenges when some member states may have weaker regulatory capacity or governance issues.


Topics

Legal and regulatory | Jurisdiction | Human rights principles


Gap between policy development and regulatory implementation

Speakers

– John Evans
– Audience

Arguments

Regulatory Challenges and Resource Allocation


Electoral Integrity and Democracy Protection


Summary

Both acknowledge the tension between policy aspirations and practical enforceability, with John Evans noting the need to move quickly despite not having the luxury of gradual development like telecommunications regulation, while audience members raise concerns about implementability of policy recommendations.


Topics

Legal and regulatory | Jurisdiction | Human rights


Similar viewpoints

Both recognize and emphasize Ireland’s transformation into a regulator that acts with strength and integrity, representing a significant departure from previous approaches to tech company oversight in Ireland.

Speakers

– John Evans
– Maria Farrell

Arguments

Ireland’s Regulatory Reputation and Independence


Ireland’s Regulatory Reputation and Independence


Topics

Legal and regulatory | Human rights principles | Privacy and data protection


Both acknowledge the complexity of protecting democratic processes online and the challenges of implementing media privileges and content moderation policies, though they approach from different perspectives of implementation versus policy development.

Speakers

– John Evans
– Audience

Arguments

Electoral Integrity and Democracy Protection


Electoral Integrity and Democracy Protection


Topics

Sociocultural | Content policy | Human rights | Freedom of expression | Freedom of the press


Unexpected consensus

Ireland’s regulatory transformation and credibility

Speakers

– John Evans
– Maria Farrell

Arguments

Ireland’s Regulatory Reputation and Independence


Ireland’s Regulatory Reputation and Independence


Explanation

It’s unexpected to see such strong consensus between a regulator and an activist about the regulator’s performance. Maria Farrell’s explicit praise for the regulator’s strength and integrity, contrasted with criticism of other Irish regulatory bodies, suggests genuine recognition of effective regulatory action rather than typical regulatory capture or weakness.


Topics

Legal and regulatory | Human rights principles | Privacy and data protection


Urgency of regulatory action despite implementation challenges

Speakers

– John Evans
– Audience

Arguments

Regulatory Challenges and Resource Allocation


Electoral Integrity and Democracy Protection


Explanation

Despite acknowledging significant implementation challenges and resource constraints, there’s consensus that waiting for perfect solutions is not an option due to the severity of emerging harms. This represents agreement on the need for imperfect but immediate action over delayed comprehensive solutions.


Topics

Legal and regulatory | Human rights | Data governance


Overall assessment

Summary

The discussion reveals strong consensus on the fundamental challenges facing digital regulation: resource constraints, implementation complexity, and the need for cross-border coordination. There’s also unexpected agreement on Ireland’s regulatory transformation and the urgency of action despite imperfect tools.


Consensus level

High level of consensus on challenges and approach, with constructive dialogue rather than adversarial positions. This suggests a mature understanding of regulatory realities and shared commitment to effective platform oversight, which bodes well for continued cooperation and development of regulatory frameworks.


Differences

Different viewpoints

Policy Development vs. Regulatory Implementation Gap

Speakers

– John Evans
– Audience

Arguments

Regulatory Challenges and Resource Allocation – The regulator uses a risk-based approach to prioritize enforcement efforts, considering factors like platform reach, user demographics, and past enforcement history. Unlike traditional regulatory sectors that had decades to develop, online safety regulation must move quickly due to the severity of emerging harms.


Electoral Integrity and Democracy Protection – There are concerns about the gap between policy recommendations and practical enforceability in digital regulation. Many policy discussions propose interventions that may not be legally or technically implementable, raising questions about whether regulatory expertise is adequately considered in policy development.


Summary

John Evans advocates for working within existing regulatory frameworks first and learning from implementation, while the audience member argues that policy development often proposes unenforceable interventions without adequate consideration of regulatory expertise and practical implementation challenges.


Topics

Legal and regulatory | Human rights | Freedom of the press


Unexpected differences

Regulatory Timeline and Urgency

Speakers

– John Evans
– Audience

Arguments

Regulatory Challenges and Resource Allocation – Unlike traditional regulatory sectors that had decades to develop, online safety regulation must move quickly due to the severity of emerging harms.


Electoral Integrity and Democracy Protection – There are concerns about the gap between policy recommendations and practical enforceability in digital regulation.


Explanation

While both parties acknowledge the urgency of digital regulation, they have opposing views on how to balance speed with effectiveness. Evans argues for rapid implementation despite imperfections, while the audience suggests that rushing may lead to unenforceable policies. This disagreement is unexpected because both parties want effective regulation but fundamentally differ on the risk-reward calculation of moving quickly versus ensuring implementability.


Topics

Legal and regulatory | Human rights | Jurisdiction


Overall assessment

Summary

The main areas of disagreement center on the balance between policy ambition and regulatory practicality, resource adequacy for the scale of platform oversight, and the effectiveness of current cross-border coordination mechanisms.


Disagreement level

Moderate disagreement with significant implications. While speakers share common goals of effective platform regulation and protection of democratic values, their different perspectives on implementation approaches could lead to tensions between policy development and regulatory execution. The disagreements suggest a need for better integration between policy-making and regulatory expertise to ensure that ambitious digital governance goals are matched with practical enforcement capabilities.


Takeaways

Key takeaways

Ireland serves as a critical hub for European digital regulation, handling approximately 80% of complaints against online platforms due to many major tech companies being headquartered there


The Digital Services Act creates an effective network-based regulatory approach requiring coordination between member states and the European Commission, moving beyond failed self-regulatory models


Ireland has demonstrated serious commitment to digital regulation by expanding from 40 to over 200 staff members, with plans to reach 300, showing that adequate resourcing is possible when there is political will


A two-dimensional approach to child protection (addressing both content through safety codes and systems through the DSA) provides a comprehensive framework for protecting minors online


Electoral integrity requires coordinated cross-border regulatory response, with tools like election guidelines, candidate support packs, and pre-election scenario planning proving effective


Ireland’s media regulator has established a reputation for acting with strength and integrity, contrasting positively with other Irish regulators’ handling of tech companies


Risk-based prioritization considering platform reach, user demographics, and enforcement history across Europe is essential for effective resource allocation


The urgency of online harms means regulators cannot wait decades for frameworks to mature as was possible with telecommunications regulation


Resolutions and action items

Ireland will continue developing risk-based prioritization mechanisms to focus regulatory efforts more precisely on high-harm areas


Coordinated enforcement actions against adult sites below the 45 million user threshold will be pursued by Digital Services Coordinators across member states


Ireland plans to extend educational resources to primary age children and run cinema-based awareness campaigns for parents during summer


Research will be conducted to evaluate the effectiveness of candidate support packs provided during elections


The regulator committed to ongoing participation in international networks like the Global Online Safety Regulators Network to share best practices


Unresolved issues

How to effectively coordinate with member states that may have weak digital service coordinators or rule of law backsliding issues


The challenge of ensuring policy recommendations are actually enforceable and implementable from a legal/regulatory perspective


Implementation details for the new media privilege rules for journalistic content in platform content moderation, which platforms don’t yet know how to execute


Long-term sustainability of regulatory independence and integrity as political contexts change


Whether current regulatory frameworks will prove sufficient or if additional legislative measures will be needed as harms evolve


How to balance the need for quick action on severe harms with the time required for regulatory frameworks to mature and prove effective


Suggested compromises

Using existing Digital Services Act and Digital Markets Act frameworks as a first step while learning and developing new approaches, rather than waiting for perfect solutions


Leveraging the European Commission and Digital Services Board as protective mechanisms against potential regulatory capture or weakness in individual member states


Combining enforcement actions with educational initiatives and media literacy efforts rather than relying solely on punitive measures


Accepting that regulatory frameworks will need to evolve iteratively rather than expecting comprehensive solutions immediately, while still acting urgently on severe harms


Thought provoking comments

How do you make your choices? Do you have enough resources? What is your policy on prioritising with the resources that you have given all the challenges that there are?

Speaker

Audience member (first questioner)


Reason

This question cuts to the heart of regulatory effectiveness by addressing the fundamental challenge of resource allocation in digital regulation. It acknowledges the enormous scope of the regulator’s mandate while recognizing the practical limitations that could undermine their effectiveness.


Impact

This question shifted the discussion from theoretical regulatory frameworks to practical implementation challenges. It prompted Evans to reveal concrete details about organizational growth (from 40 to 200+ people), resource allocation strategies, and the collaborative nature of European digital regulation. It also led him to discuss their risk-based prioritization approach, adding depth to understanding how modern digital regulation actually works in practice.


Very often in policy discussions there are calls for interventions that are maybe not necessarily enforceable… do you feel that you with your expertise and specific background knowledge on how complex these issues are that this is also taken up on the other side of the spectrum in the policy and regulatory development?

Speaker

OECD policy worker


Reason

This comment highlights a critical disconnect between policy aspirations and regulatory reality – the gap between what policymakers want to achieve and what regulators can actually enforce. It introduces the concept of implementability as a key constraint in digital governance.


Impact

This question prompted Evans to provide historical context about internet regulation evolution, explaining how the ‘hands-off’ approach gradually gave way to targeted legislation like the DSA and DMA. It led to a deeper discussion about the tension between the urgency of addressing digital harms and the time needed to develop mature regulatory frameworks, comparing it to the 25-year evolution of telecommunications regulation.


Amongst other digital and human rights activists around Europe, Coimisiún na Meán has already a reputation of acting with strength and integrity as a regulator, which has been completely lacking in our data protection regulator and how Ireland deals with tax and the tech cos… what are you doing and can you do to ensure that you continue to act with that strength, with that integrity, with that moral courage?

Speaker

Maria Farrell


Reason

This comment is particularly insightful because it directly addresses Ireland’s controversial reputation as a ‘regulatory haven’ for tech companies while acknowledging a positive counter-narrative. It raises the fundamental question of regulatory capture and independence in a jurisdiction where major tech companies are headquartered.


Impact

This comment created a moment of validation for the regulator while simultaneously challenging them to maintain their independence. It shifted the conversation toward questions of institutional integrity and political pressure. Evans’ response about having a ‘North Star’ strategy and political support revealed important insights about how regulatory independence can be maintained even in challenging political-economic contexts.


How would you relate as an Irish media commission to member states with perhaps a less strong digital service coordinator or making decisions that you do not agree with from a rule of law perspective?

Speaker

Audience member (final questioner)


Reason

This question introduces the complex geopolitical dimension of digital regulation within the EU, specifically addressing how democratic backsliding in some member states could affect the coordinated regulatory approach that the DSA depends upon. It highlights potential systemic vulnerabilities in the network-based regulatory model.


Impact

While Evans’ response was brief, this question opened up discussion of the safeguards built into the DSA framework, particularly the role of the European Commission and Digital Services Board in maintaining standards across member states. It highlighted the tension between national sovereignty in regulation and the need for consistent enforcement of digital rights across the EU.


Overall assessment

These key comments transformed what could have been a straightforward regulatory presentation into a nuanced exploration of the practical, political, and systemic challenges facing digital governance. The questions moved the discussion from describing regulatory frameworks to examining their real-world implementation challenges, resource constraints, political pressures, and systemic vulnerabilities. Maria Farrell’s comment was particularly impactful in acknowledging Ireland’s unique position and the regulator’s emerging reputation, while the OECD questioner’s focus on enforceability highlighted the gap between policy ambition and regulatory reality. Together, these interventions created a more honest and comprehensive picture of digital regulation as an evolving, resource-constrained, and politically complex endeavor rather than a purely technical exercise.


Follow-up questions

How effective was the candidate pack initiative in supporting politicians’ safe participation in public life during elections?

Speaker

John Evans


Explanation

John Evans mentioned they are conducting research to find out exactly how the candidate pack helped politicians when targeted online, indicating this is an ongoing area of investigation to measure impact and improve future initiatives


How will platforms implement Article 28 of the Digital Services Act regarding protection of minors’ safety, security and privacy?

Speaker

John Evans


Explanation

John Evans noted that guidance from the European Commission on implementing this article will emerge later in the year, suggesting this is an area requiring further clarification and research on practical implementation


How to approach the new media privileges rule for journalistic content on platform content moderation from a regulatory enforcement perspective?

Speaker

OECD audience member


Explanation

The audience member noted that platforms don’t know how to implement this rule yet and asked about the regulatory body’s preparedness, indicating this is an area requiring further research and policy development


How can policy discussions better integrate regulatory expertise to ensure proposed interventions are actually enforceable?

Speaker

OECD audience member


Explanation

The audience member raised concerns about policy calls for interventions that may not be legally or regulatorily enforceable, suggesting need for better interlinkage between policy development and regulatory expertise


How can regulators maintain strength and integrity in the face of changing political contexts while ensuring independent regulation?

Speaker

Maria Farrell


Explanation

This question addresses the critical challenge of maintaining regulatory independence and moral courage over time, which is essential for effective platform regulation


How should digital services coordinators handle situations involving member states with rule of law backsliding?

Speaker

Audience member


Explanation

This question addresses potential conflicts within the European regulatory network when some member states may have compromised rule of law standards, requiring research into governance mechanisms and accountability measures


What are the best practices and different regulatory approaches being tested globally for online safety regulation?

Speaker

John Evans


Explanation

John Evans mentioned the value of learning from other regulators like Australia’s eSafety Commission and Ofcom through the Global Online Safety Regulators Network, indicating ongoing research into comparative regulatory approaches


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Protection of Subsea Communication Cables

Protection of Subsea Communication Cables

Session at a glance

Summary

This discussion at the Internet Governance Forum focused on the security and resilience of subsea telecommunication cables, which carry over 99% of global intercontinental data traffic. The session was co-hosted by UNIDIR and the Norwegian government, bringing together ministers, industry experts, and international organizations to address growing threats to this critical infrastructure.


Ministers and senior officials from Norway, Finland, Nigeria, and Estonia highlighted how recent incidents, particularly in the Baltic Sea, have demonstrated the vulnerability of subsea cables to both accidental damage and intentional sabotage. They emphasized that geopolitical tensions have significantly increased risks, with incidents involving Russia’s shadow fleet cutting cables in European waters. The speakers stressed that protecting subsea cables requires comprehensive international cooperation, as these systems cross national borders and operate in international waters.


Industry representatives shared practical experiences, including a detailed account of a cable cut between Latvia and Sweden that took 28 days to repair despite good preparation. They discussed emerging technologies like distributed acoustic sensing (DAS) that can detect threats up to two kilometers away and provide real-time monitoring of underwater activities. The panelists emphasized that fiber optic cables can essentially function as massive underwater sensor networks.


Key themes throughout the discussion included the importance of public-private partnerships, the need for redundancy and route diversity, and the critical role of preparedness and crisis management planning. Speakers highlighted initiatives like the ITU’s International Advisory Body for Submarine Cable Resilience and regional cooperation agreements in the North Sea and Baltic Sea. The discussion concluded that strengthening subsea cable resilience is a “team sport” requiring coordinated efforts across governments, industry, and international organizations, with resilience built by design rather than as an afterthought.


Keypoints

## Major Discussion Points:


– **Critical Infrastructure Vulnerability**: Subsea telecommunication cables carry over 99% of global intercontinental data traffic, making them essential digital lifelines that remain largely “out of sight and out of mind” despite their critical importance to the global digital economy, healthcare, education, and financial systems.


– **Evolving Threat Landscape**: The security environment has fundamentally changed, with incidents in the Baltic Sea, North Sea, and other regions showing a dramatic increase in both accidental and intentional damage to cables, including activities by “shadow fleets” and state-backed interference amid growing geopolitical tensions.


– **Multi-stakeholder Cooperation as Essential**: Effective protection requires coordinated efforts between governments, private industry, international organizations, and civil society, with emphasis on public-private partnerships, regional cooperation agreements (like those in the North Sea and Baltic Sea), and international bodies like the ITU Advisory Body for Submarine Cable Resilience.


– **Resilience by Design, Not Response**: Protection must be intentional and built into systems from the planning stage, incorporating redundancy, route diversity, advanced monitoring technologies (like distributed acoustic sensing), rapid repair capabilities, and comprehensive preparedness including crisis management protocols and regular exercises.


– **Regulatory and Legal Framework Gaps**: There’s a need for updated international cooperation mechanisms, streamlined permitting processes for repairs, clarification of roles between civil and defense authorities, and better implementation of existing legal tools under international law rather than creating entirely new regulatory structures.


## Overall Purpose:


The discussion aimed to raise awareness about the critical vulnerability of subsea cable infrastructure and foster international cooperation to strengthen protection and resilience measures. The session sought to move beyond mere conversation to serve as “a call for governments, industry, and the wider multi-stakeholder community to come together and exchange best practices, strengthen cooperation, and build resilience.”


## Overall Tone:


The discussion maintained a consistently serious and urgent tone throughout, reflecting the critical nature of the infrastructure being discussed. Speakers emphasized that “the risks are no longer hypothetical” and stressed the need for immediate action. While collaborative and constructive, there was an underlying sense of urgency driven by recent incidents and the recognition that current threats are both increasing and evolving. The tone remained professional and solution-oriented, with participants sharing concrete experiences and actionable recommendations rather than engaging in abstract theoretical discussions.


Speakers

**Speakers from the provided list:**


– **Giacomo Persi Paoli** – Head of the Security and Technology Program at the United Nations Institute for Disarmament Research (UNIDIR), session moderator and co-host


– **Karianne Tung** – Minister of Digitalisation and Public Governance of Norway


– **Jarno Syrjala** – Under-Secretary of State for International Trade, Finland


– **Bosun Tijani** – Minister of Communications, Innovation and Digital Economy of Nigeria, co-chair of the ITU International Advisory Body for Submarine Cable Resilience


– **Liisa-Ly Pakosta** – Minister of Justice and Digital Affairs of Estonia


– **Camino Kavanagh** – Expert and research fellow from UNIDIR


– **Steinar Bjornstad** – Strategic competence and research manager at TAMPNET (offshore telecom service provider)


– **Evijs Taube** – Member of the management board from Latvia State Radio and Television Center


– **Sandra Maximiano** – Chair of the board of directors of ANACOM (National Regulatory Authority for Communications in Portugal), co-chair of the ITU International Advisory Body for Submarine Cable Resilience


– **Kent Bressie** – Legal advisor for International Cable Protection Committee (ICPC), participating remotely


– **Session video** – Video content/narrator (not a human speaker)


**Additional speakers:**


None identified beyond the provided speakers names list.


Full session report

# Comprehensive Report: Strengthening the Security and Resilience of Subsea Telecommunication Cables


## Executive Summary


This Internet Governance Forum session, co-hosted by the United Nations Institute for Disarmament Research (UNIDIR) and the Norwegian government, brought together ministers, industry experts, and international organisations to address the critical vulnerability of subsea telecommunication cables. The discussion emphasised that these cables, which carry over 99% of global intercontinental data traffic, represent essential digital lifelines that remain largely “out of sight and out of mind” despite their fundamental importance to modern society.


The session established a four-pillar framework for cable resilience: protection, planning, preparedness, and response. Speakers emphasised that “the risks are no longer hypothetical” and stressed the need for immediate action, driven by recent incidents and the recognition that current threats are both increasing and evolving. Key outcomes included the establishment of ITU Advisory Body working groups for 2025-26, regional cooperation agreements in the North Sea and Baltic Sea, and concrete national commitments to enhanced protection frameworks.


The discussion remained professional and solution-oriented, with participants sharing concrete experiences and actionable recommendations, ultimately concluding that cable protection is fundamentally “a team sport” requiring coordinated efforts across governments, industry, and international organisations.


## Critical Infrastructure Vulnerability and Societal Dependence


The discussion began with a stark assessment of society’s complete dependence on subsea cables. Giacomo Persi Paoli, Head of the Security and Technology Program at UNIDIR and session moderator, established that subsea cables carry over 99% of global intercontinental data traffic, making them critical digital infrastructure that underpins the global economy.


Karianne Tung, Norway’s Minister of Digitalisation and Public Governance, emphasised that digital society is completely dependent on submarine cables for healthcare, education, and transport systems. This dependency was further illustrated by Liisa-Ly Pakosta, Estonia’s Minister of Justice and Digital Affairs, who explained that as a fully digital state, Estonia faces actual threats to government services when cables are cut.


The human impact of cable failures was powerfully articulated by Bosun Tijani, Nigeria’s Minister of Communications, Innovation and Digital Economy, who shared his personal experience during West African cable cuts last year in March: “I was surprised at how, of course, we were fortunate, the private sector came together. But as a minister, I didn’t have any answer to give to people. And people don’t often complain about companies when you have natural disasters. It’s the government that they look to for answer.” This comment fundamentally reframed the discussion from technical protection measures to governance accountability, highlighting how governments are held accountable for infrastructure failures regardless of ownership structures.


## Evolving Threat Landscape and Geopolitical Context


The discussion revealed a fundamental shift in the security environment surrounding subsea cables. Tung highlighted recent incidents with damages to subsea cables in the Baltic Sea and Red Sea, demonstrating increased vulnerability in critical maritime regions. This assessment was reinforced by Pakosta, who provided stark geopolitical context: “Let us remember that it was 1884 when the Paris Convention of Undersea Telegraphic Cables was agreed… So this is actually the situation where we are just now, as well, within the broader geopolitical situation. What we see around this area… that the Russian shadow fleet is cutting down our connections.”


Pakosta noted a dramatic rise in “accidents” during the full-scale war in Ukraine, with intentional cable cutting by Russian shadow fleet vessels, connecting current geopolitical tensions to a 140-year pattern of intentional cable disruption during conflicts.


Camino Kavanagh, an expert and research fellow from UNIDIR, provided crucial empirical context with historical data analysis. She referenced how in 1881, a group of countries concerned about damage to cables in the North Sea raised the issue in pre-negotiations to the 1884 Convention, and in 1882, a specific government brought statistics showing “60% of damage caused by natural events, 35% by unintentional acts due to accidents at sea or force majeure, and 5% due to gross negligence and some malign activities.” Moving forward 143 years later, she noted that whilst these statistics haven’t changed dramatically, intentional threats are increasing, though it remains “very hard to ascertain responsibility for some of the incidents.”


Jarno Syrjala, Finland’s Under-Secretary of State for International Trade, emphasised that geopolitical tensions have fundamentally changed the security environment with significant implications for digital infrastructure safety. He stressed the need for urgency in developing innovative technological solutions, noting that different regions experience vastly different threat landscapes and problem sets.


## Multi-Stakeholder Cooperation and Governance


A central theme throughout the discussion was the absolute necessity of international cooperation and public-private partnerships for effective subsea cable protection. Tung articulated this clearly: “Cross-border cooperation is crucial since submarine cables cross national borders and international waters.” She provided concrete examples of successful regional cooperation, including North Sea cooperation agreements from 2024 and Baltic Sea cooperation agreements established in May 2025 with Denmark, Estonia, Finland, Germany, Iceland, Latvia, Lithuania, Poland, Sweden, the EU, and Norway for protection of critical subsea infrastructure.


Syrjala reinforced this theme, explaining how international cooperation through NATO, the European Union, and the International Telecommunication Union (ITU) helps build resilience and response capabilities. He advocated for the multi-stakeholder community to have a more prominent role in submarine cable resilience discussions, emphasising that solid public-private partnership represents one of the most important aspects of telecommunications resilience.


Sandra Maximiano, co-chair of the ITU International Advisory Body for Submarine Cable Resilience and Chair of the board of directors of ANACOM (Portugal’s National Regulatory Authority for Communications), highlighted how the ITU Advisory Body provides a global platform for collaboration between public and private sectors. She noted that the body, co-chaired with Bosun Tijani, has established three working groups for 2025-26 focusing on resilience by design, timely deployment and repair, and risk identification and monitoring.


Kent Bressie, legal advisor for the International Cable Protection Committee (ICPC) participating remotely, provided industry perspective on this cooperation. He noted that the ICPC, founded in 1958 with more than 240 members from approximately 75 countries, emphasises the “need for better awareness and communication between submarine cable operators, marine industries, and governments. Governments need to understand what industry does and recognise actions only governments can take.”


This theme of complementary capabilities was reinforced throughout the discussion, with speakers acknowledging that whilst private industry owns and operates most cables, only governments can take certain political and military responses to threats. The challenge lies in creating effective coordination mechanisms that leverage both sectors’ strengths whilst maintaining clear accountability structures.


## Technical Innovation and Resilience by Design


The discussion highlighted significant technological advances in cable monitoring and threat detection alongside the fundamental principle of building resilience into systems from the beginning. Steinar Bjornstad, Strategic competence and research manager at TAMPNET (an offshore telecom service provider), explained how fiber sensing technology allows cables to work as underwater microphones, detecting approaching threats: “Combining multiple monitoring tools including AIS information and fiber sensing provides comprehensive situational awareness.”


The technological capability was further elaborated through distributed acoustic sensing, which turns fiber optic cables into virtual hydrophones for ocean monitoring. Light pulses are injected into fiber cables, and backscattered light reveals acoustic pressure fields, essentially making “the ocean transparent” for monitoring purposes. The technology can detect threats up to two kilometers away and provide real-time monitoring of underwater activities.
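To make the sensing principle concrete, here is a minimal sketch of the signal-processing idea behind distributed acoustic sensing: phase changes in the light backscattered along the fiber are differenced from pulse to pulse for each sensing channel, and a localized disturbance stands out as excess variance at the affected channels. All parameters, the simulated “anchor drag” disturbance, and the detection threshold are invented for illustration; this is not any vendor’s interrogator design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fiber modelled as 1,000 sensing channels spaced 10 m apart (10 km of
# cable), interrogated by 200 successive light pulses (time samples).
n_channels, n_pulses, spacing_m = 1000, 200, 10.0

# Background: small random phase noise in the Rayleigh backscatter.
phase = rng.normal(0.0, 0.01, size=(n_pulses, n_channels))

# Hypothetical disturbance near channel 700: an oscillating strain adds
# phase modulation over a ~30-channel stretch of the fiber.
t = np.arange(n_pulses)[:, None]
ch = np.arange(n_channels)[None, :]
envelope = np.exp(-(((ch - 700) / 15.0) ** 2))
phase += 0.5 * np.sin(2 * np.pi * 0.05 * t) * envelope

# Processing: the pulse-to-pulse phase difference per channel tracks
# dynamic strain; its variance over the record flags acoustic activity.
strain_rate = np.diff(phase, axis=0)
activity = strain_rate.var(axis=0)

# Simple detector: flag channels well above the median noise floor.
hits = np.flatnonzero(activity > 10 * np.median(activity))
if hits.size:
    centre = hits[activity[hits].argmax()]
    print(f"Disturbance detected near {centre * spacing_m / 1000:.2f} km "
          f"along the cable ({hits.size} channels above threshold)")
else:
    print("No disturbance detected")
```

Run as-is, the sketch reports a disturbance near the 7 km mark, which mirrors how a real interrogator localizes activity along the cable to within roughly one channel spacing.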


Evijs Taube, a member of the management board from Latvia State Radio and Television Center, introduced a paradigm shift in thinking about cables: “Every cable, existing cable is a big asset, and we can call it a big sensor. If we install… distributed or centralised… integrated system of such sensors in a particular area, for example, Baltic Sea… that would give a big benefit, not only protecting the cables, but to understand what is going on under the water.”


This comment transformed the discussion from viewing cables as passive infrastructure requiring protection to active sensing networks that could provide comprehensive underwater surveillance, turning the infrastructure itself into a security solution.


A fundamental principle that emerged was “resilience by design.” Tijani articulated this powerfully: “Resilience should be intentional. It shouldn’t be something that is afterthought.” This philosophy emphasised that protection must be built into systems from the planning stage rather than added reactively.


Bjornstad explained how multiple cables and optical switching enable quick traffic rerouting when cables fail, demonstrating the importance of redundancy in system design. Maximiano reinforced this by advocating for building redundancy through multiple geographically diverse cable routes and avoiding strategic choke points. Tijani emphasised that countries need multiple access points to cables rather than single cable connections, noting that resilience requires calculated investment in infrastructure diversity.
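As a toy illustration of why route diversity enables this kind of rerouting, the sketch below models landing stations as a small graph and searches for an alternative path after a segment fails. The station names and links are hypothetical, and real networks rely on pre-provisioned optical protection paths and routing protocols rather than an ad hoc search; the point is only that a second, geographically diverse route keeps the endpoints connected.

```python
from collections import deque

# Hypothetical mesh of landing stations; each pair is a cable segment.
links = {
    ("Oslo", "Aberdeen"), ("Oslo", "Esbjerg"), ("Aberdeen", "Dublin"),
    ("Esbjerg", "Amsterdam"), ("Amsterdam", "Dublin"),
}

def neighbours(graph, node):
    """Stations directly linked to node by a working segment."""
    return {b for a, b in graph if a == node} | {a for a, b in graph if b == node}

def route(graph, src, dst):
    """Breadth-first search for any working path from src to dst."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbours(graph, path[-1]) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # endpoints disconnected: no redundancy left

print("Primary route:", route(links, "Oslo", "Dublin"))
# A cable cut removes one segment; traffic still reaches Dublin.
print("After failure:", route(links - {("Oslo", "Aberdeen")}, "Oslo", "Dublin"))
```

With only a single cable between the endpoints, the second call would return None, which is the single-point-of-failure situation the speakers warn against.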


## Response Preparedness and Crisis Management


The practical challenges of cable repair and crisis response emerged as critical concerns throughout the discussion. Bjornstad explained that repair alliance membership ensures cable repair within a couple of weeks when incidents occur, but this requires significant advance preparation and investment.


Taube provided a detailed case study of a successful repair between Latvia and Sweden, explaining: “we tried three times to recover, the third time was successful, and despite of February being a short month, we managed to fix it within a month, so 28 days.” This demonstrated the importance of preparation, spare parts, and standby vessel agreements, while highlighting that even with good preparation, repairs can take nearly a month, creating extended vulnerability periods.


Effective crisis management emerged as a crucial component requiring clear crisis management teams, communication channels with partners, and public communication strategies. Taube emphasised the importance of established communication lines with international partners and 24/7 contact protocols, noting that preparation through table exercises and drills is essential, though real incidents provide irreplaceable learning experiences.


Maximiano identified the need for collective mechanisms to support repair capacity, especially for regions lacking resources. She noted that small island states and remote regions need special attention where economic incentives for response are lower. The discussion revealed that limited repair ships and talent for cable maintenance require calculated investment and regional cooperation.


Tijani highlighted the workforce development challenges, noting difficulties in attracting young talent to the submarine cable industry. This human resource challenge compounds the technical and logistical difficulties of maintaining adequate repair capacity globally.


## Regulatory Framework Challenges and Best Practices


The regulatory dimension of cable protection revealed several complex challenges. Bressie presented ICPC best practices advocating a holistic approach including default separation distances and single government contact points. However, he also warned that government policies can potentially undermine cable protection through excessive delays and clustering requirements.


A particularly striking example of counterintuitive regulation was Bressie’s observation about cable location transparency: “We see a renewed push by some governments to remove cables from nautical charts. This is woefully misguided. Given that approximately 70 percent of cable damage each year is caused by fishing and anchors, removing cables from nautical charts would significantly increase those risks and make it impossible for cable owners to pursue damages claims.”


This comment challenged security-through-obscurity thinking, demonstrating how transparency actually enhances protection by enabling avoidance of accidental damage. It highlighted the need for evidence-based security measures rather than intuitive but potentially counterproductive approaches.


Maximiano noted that regulation needs to keep pace with technological innovation and high-capacity connectivity demands, particularly as artificial intelligence applications require massive computational capacity that depends on robust cable infrastructure. Syrjala noted that Finland had already transposed the NIS2 directive into national law in April 2025 with comprehensive telecommunications resilience requirements.


## Implementation and Future Challenges


The discussion produced several concrete commitments and initiatives. The ITU Advisory Body for Submarine Cable Resilience established three working groups for 2025-26 focusing on resilience by design, timely deployment and repair, and risk identification and monitoring. The Abuja Declaration was approved in February 2025 as a milestone for international cooperation on submarine cable resilience.


Multiple countries committed to implementing the EU Action Plan on Cable Security with four objectives: prevention, detection, response and repair, and deterrence. Several nations signed the New York Declaration on Submarine Cable Security to promote integrity and accessibility.


At the national level, Norway committed to establishing dedicated cooperation between private sector and civil/defence authorities with clarified roles and responsibilities. Nigeria announced plans to set up a dedicated desk within its communications commission for cable protection protocols and international coordination.


Despite these concrete commitments, several significant challenges remain unresolved. Limited repair capacity globally, particularly the shortage of specialised vessels and trained personnel for cable maintenance, represents a critical vulnerability that requires sustained investment and international coordination.


Many developing countries and small island states continue to lack adequate frameworks and expertise for cable protection, creating global vulnerabilities that could affect international connectivity. The economic incentives for prompt response mechanisms in remote regions remain insufficient, requiring innovative financing and cooperation mechanisms.


Technical challenges include the ongoing debate over removing cables from nautical charts, difficulties in attributing responsibility for cable incidents, and regulatory delays that can undermine protection efforts. The workforce development challenge of attracting young talent to the submarine cable industry requires sustained attention from both industry and educational institutions.


## Conclusion and Strategic Implications


The discussion concluded with Giacomo Persi Paoli’s synthesis that strengthening subsea cable resilience is fundamentally “a team sport” requiring coordinated efforts across governments, industry, and international organisations. He emphasised his four-pillar framework of protection, planning, preparedness, and response, noting that “plans are useless unless they are put in practice through concrete measures of preparedness” and stressing the need for “effective, quick response to minimize disruption.”


The session successfully moved beyond abstract discussions to concrete commitments and actionable frameworks, though significant implementation challenges remain. The strong consensus among participants suggests that the subsea cable protection community has developed mature understanding of the challenges and potential solutions.


The session’s emphasis on resilience by design, international cooperation, and public-private partnerships provides a solid foundation for addressing the evolving threats to this critical infrastructure. The combination of historical perspective, current geopolitical realities, and future technological possibilities creates a comprehensive framework for action.


Ultimately, the discussion reinforced that protecting subsea cables is not merely a technical challenge but a fundamental requirement for maintaining global digital connectivity, economic stability, and societal resilience in an increasingly interconnected world. The urgency expressed by all participants reflects the recognition that the time for preparation and action is now, before more serious incidents test the limits of current protection capabilities.


Session transcript

Giacomo Persi Paoli: Excellencies, distinguished delegates, colleagues, ladies and gentlemen, good morning and a warm welcome to this session, whether you’re following us here in the room or online. We’re here today to discuss a topic that is both critical and too often overlooked, the security and resilience of subsea telecommunication cables. This hidden infrastructure carries over 99% of global intercontinental data, silently underpinning every facet of our digital world that we rely on. Yet, despite their criticality, they remain largely out of sight and too often out of mind. My name is Giacomo Persi Paoli. I’m the head of the security and technology program at the United Nations Institute for Disarmament Research, UNIDIR, and it is an honor to co-host this session in partnership with the government of Norway as part of this year’s Internet Governance Forum. Recent incidents, whether accidental or deliberate, have underscored how vulnerable these lifelines, these digital lifelines, truly are. The growing intersection of geopolitical tensions, malicious cyber capabilities, and infrastructure fragility highlights a stark reality. The risks are no longer hypothetical. They’re here and they’re multiplying. This is why this session aspires to be more than a conversation. It aspires to serve as a call for governments, industry, and the wider multi-stakeholder community to come together and exchange best practices, strengthen cooperation, and build resilience into one of the most vital components of the global digital ecosystem. This session will unfold in two parts. We will begin with a high-level ministerial dialogue offering national perspectives on how countries are approaching the protection of subsea cables. Following that, we will turn to a multi-stakeholder panel of experts who will reflect on the evolving threat landscape and share actionable insights on how to secure subsea cable infrastructure. We are privileged to be joined by an exceptional group of leaders and practitioners from across sectors and regions. Their experience and ideas are vital as we chart a path forward, one that reflects both the complexity of today’s challenges and the spirit of international cooperation that forums like the IGF are designed to inspire. And now, without further ado, I have the honour of inviting here on stage Minister Karianne Tung, Minister of Digitalisation and Public Governance of Norway, Jarno Syrjala, Under-Secretary of State for International Trade in Finland, Bosun Tijani, Minister of Communications, Innovation and Digital Economy of Nigeria, and Liisa-Ly Pakosta, Minister of Justice and Digital Affairs of Estonia. Please join me in a round of applause in welcoming them on stage. Thank you once again for taking the time to join us to discuss this very important topic. And I would like to start with you, Minister Tung, and give you the floor and the opportunity to introduce the topic. Please.


Karianne Tung: Thank you, moderator. Good morning, everyone. It is a pleasure to be here together with you for this important session on the protection of subsea telecommunication cables, and thank you once again for being here. The underwater cables make up the foundation of the global internet infrastructure, enabling people, communities and businesses to communicate, share and innovate. Recent years’ incidents with damages to subsea infrastructure have reminded us how important it is to increase the resilience of this critical infrastructure. We’ve seen the incident with the Nord Stream pipeline, damages to subsea cables in the Baltic Sea and in the Red Sea, and once again we see war raging on European soil. The fact that more than 99% of the intercontinental data traffic is carried by subsea communication cables has raised the awareness that we must better protect this critical infrastructure. Norway has intensified our efforts to increase the security of subsea cables. We are conducting surveillance of subsea cables for detection and prevention of threats. We make use of innovative technologies to monitor the subsea cables, enable detection of threats and incidents, and quick notification and intervention. We are also establishing a close cooperation between the private sector and the civil and defence authorities. This way we can combine and maximise the knowledge and strength of the civil and private sector and the defence sector in this important work. We have seen the importance of clarifying the roles and responsibilities of owners of subsea cables, civil authorities and the defence sector. This experience from the Baltic Sea has shown us that such clarifications are needed for swift action when incidents occur. But there’s no escaping that submarine cable infrastructure often goes across both national borders and international waters. Therefore, it is crucial to have both European and international cooperation to identify and implement effective security measures and the necessary regulatory framework. One good example of such cooperation was established in 2024 for the protection of critical subsea infrastructure in the North Sea between the North Sea countries, Belgium, the Netherlands, Germany, the UK, Denmark, and Norway. In May 2025, a similar cooperation was agreed on for protection of critical subsea infrastructure in the Baltic Sea with Denmark, Estonia, Finland, Germany, Iceland, Latvia, Lithuania, Poland, Sweden, the EU, and Norway. We need a combination of national, regional, and international cooperation to achieve effective resilience measures and the necessary exchange of information about threats and sharing of best practice. Threats to subsea communication cables are not limited by national borders, so international cooperation is vital for the protection of subsea cables, and together we can better advance new and innovative ways of securing these critical cables that the Internet fully depends on. Thank you.


Giacomo Persi Paoli: Thank you very much, Minister Tung, for sharing these opening remarks and also for highlighting, among other things, the importance of cooperation, both between states and governments, but also public-private cooperation as a key enabler for the protection of subsea cables. And now I’d like to give the floor to Jarno Syrjala, Under-Secretary of State for International Trade, Finland, please.


Jarno Syrjala: Thank you, and good morning, ladies and gentlemen. It’s great to be here, and it’s my great pleasure to provide some remarks on behalf of the government of Finland on the protection of subsea telecommunications cables. The fundamental change in our security environment has implications for the safety and resilience of our critical digital infrastructure. As recent incidents in the Baltic Sea have demonstrated, we have a clear need to better protect our critical undersea infrastructure. Trust in digital systems is necessary for a sustainable, inclusive digital future. The security of data and digital infrastructure are key concerns for countries from both a national security and an economic standpoint. The security of digital systems and data increases trust, and trust adds to investments, welfare and prosperity. Turning to the current threat landscape and our resilience: Finland has a long history of preparedness in all areas of life, including the telecommunications sector. For example, resilience requirements for public communications networks were deemed necessary several decades ago, and have been developed over time as technologies and usage needs have changed. Comprehensive telecommunications legislation and extensive resilience requirements, covering also submarine cables, have been implemented in Finland’s national telecommunications legislation. The NIS2 directive, focused on enhancing cybersecurity across the EU, was also transposed into national law in April 2025. One of the most important aspects of telecommunications resilience is a solid public-private partnership. Over the years, close cooperation between public authorities and private companies has been established in Finland. I would also like to underline the importance of international cooperation on the security and resilience of submarine cables. International cross-border cooperation plays an important role, for example in terms of supervision, building new capabilities and preventing disruptions. We also encourage other actors to engage in international cooperation and partnership building on submarine cable resilience, including the multi-stakeholder community. NATO and the EU have increased their resilience, response and deterrence, which help us protect against all incidents, intentional or unintentional. The most recent example of this practical cooperation to protect critical undersea infrastructure, including submarine cables, is the Memorandum of Understanding which the Baltic Sea NATO Allies and the EU published in May 2025. Within the International Telecommunication Union, ITU, we have endorsed the International Advisory Body Declaration on Submarine Cable Resilience, adopted in February 2025, and look forward to engaging in the working groups. I am pleased to see the co-chair of the advisory body, Honorable Minister Tijani, taking an active role on these issues. In addition, the EU Action Plan on Cable Security defines four objectives to address the challenges in the field of submarine cable resilience and security: prevention, detection, response and repair, and deterrence. Finland endorses the actions and objectives defined in the action plan and is committed to them. Also, we are co-signatories of the New York Declaration on Submarine Cable Security.
The Declaration aims to encourage countries to promote the integrity, security and accessibility of the submarine cable infrastructure, which is important for the digital economy and a prerequisite for trusted connectivity. To conclude, our societies are increasingly dependent on reliable and secure digital connections that ensure the free flow of information and support growth in the digital economy. Securing critical infrastructure is of primary importance for Finland. This is why we will intensify cooperation with like-minded countries and actors to strengthen the security of submarine cables. A lot of focus has been placed on using new technologies in protecting critical undersea infrastructure. We need a sense of urgency on this. We need to develop well-working mechanisms that are innovative and willing to experiment. With regard to submarine cables, we underline three areas with resilience as a priority: adequacy of repair capacity, material preparation, and infrastructure monitoring and sensing capabilities. The momentum on submarine cable security is right now. It is important to enhance international cooperation on this topic. The multi-stakeholder community should also have a more prominent role in discussions on submarine cable resilience. And I’m grateful to our Norwegian colleagues for placing more attention on this topic. Thank you.


Giacomo Persi Paoli: Thank you very much, Under-Secretary Syrjala, for your remarks. For, again, stressing the importance of cooperation, both between states and within the multi-stakeholder community, but also for bringing to light the importance of trust and security as vehicles towards the resilience of digital information infrastructure. I’d like now to give the floor to Bosun Tijani, Minister of Communications, Innovation and Digital Economy of Nigeria.


Bosun Tijani: Please. Thank you so much for the opportunity, and good morning, everyone. It’s a privilege to, of course, be on this stage to contribute to this important conversation. One that is important not just because, of course, we all know that the digital economy is now literally the backbone of every economy in the world, but because submarine cables are not just technical assets. These are literally the most important critical infrastructure that we can think of in the world today. And when you compare it to many other critical infrastructure, I don’t think enough attention is being given to how we protect it. And while we may be seeing more attention, I think we have to call out, in particular, the International Telecommunication Union and the ICPC for the work they’ve done, but also the renewed focus on mobilizing actors and partnerships and collaboration to drive stronger attention on these cables. When you look at countries all over the world, and I can speak to Nigeria and, of course, a lot of other African countries, for a lot of the long-standing challenges that we face, we see communications technologies as one of the fastest ways in which we can address so many of these challenges, whether you talk about quality education or being able to provide healthcare to literally everyone on our continent and in our countries. We see the role that digital technologies, connected technologies, can actually play. We’ve seen the role of connected technologies in financial inclusion, for instance, which has changed the landscape significantly. The most popular example would be M-PESA in East Africa: how many people we’ve been able to bring into the financial system because we have connected technologies. It’s the same in my country as well, where financial technology solutions are now changing how we do things. And all these solutions would not be possible without the internet. I think the introductory remarks mentioned that 99% of the traffic that is actually carried on the internet is on subsea cables. So we can actually see why this is not just a technical asset. It is an important asset, and not only do we need to protect it, we also need to worry more about the broader resilience of these cables. Which is why as a country we’re extremely excited to be participating in the international advisory body that the ITU has put together. And this advisory body for us is not just another talk shop or opportunity to gather. It’s not one where we’re just talking about how to come up with more laws for protection, but about how we also deploy for timely repair. Because sometimes the damage to the cables is not intentional; natural disasters may also cause the destruction of these cables. How do we ensure that nations can come quickly to the point where they can fix these cables? Because just a day or two or three of some of these cables being down can cause significant problems for economies. That’s why we’re extremely excited to be part of it. The second thing about the advisory body that we find extremely useful is how we can mobilize people to think more about the protocols, and about building frameworks to improve resilience, which means that where some countries have only one cable, we have an opportunity for countries to be connected to more than one cable, right? This is also part of the framework that can improve the resilience within any country.
In many countries you have these cables, and of course these are not cables that you deal with in silos. We have about eight subsea cables in Nigeria, and nearly all of them, I think all of them actually, came through Portugal, and while coming through Portugal they passed through so many other countries. So this is something that you have to do in collaboration with so many countries. So we’re working not just on the repair, we’re working also on ensuring that we can increase the resilience by ensuring countries have multiple access to cables. We’re also working on diversity: is there even a need to have more of these cables in the first place, beyond the ones we have? Inspired by that advisory body, as a country we’re now setting up a dedicated desk within our communications commission that is responsible for ensuring that the protocol within the country is clear, and that this clarity within the country is then translated to neighboring countries and partner countries, because you can’t do this in a silo. And that’s one thing we really value. The second thing is also the talent and the resources to be able to make these repairs when they happen. We’ve seen on the African continent a limited number of ships that can quickly be deployed to help with the fixing, and there’s a limit to how much investment you can throw at it, because it’s not something that happens all the time. So it has to be an extremely calculated investment. What’s the optimal way to do it? This is something we’re thinking about. Another is talent. There’s a need for ubiquitous talent that can also support, whether it’s in the maintenance or the repair of subsea cables. That’s something Nigeria is prioritizing as well. And by extension from subsea cables, we’re then asking difficult questions even around the fiber optic network as well, because that’s what takes the benefit of subsea cables to the people. When we’re thinking about sustainability and resilience, we’re now saying: can we think of these things in conjunction, not just one in isolation, because one feeds into the other. So we’re extremely happy to be part of this, and I think it’s something we’ll urge other partners to take seriously, that we don’t just look at laws to protect these cables, but we also look at how to make them a lot more resilient as well.


Giacomo Persi Paoli: Thank you, Minister Tijani, for also highlighting how resilience is not just about protection, as you just mentioned. Protection is definitely part of securing these cables, but there is also a very strong component that relates to redundancy, to mitigation, and to being able to react when incidents do occur. And also thank you for highlighting how the work conducted under the ITU is helping drive change at the national and regional level. As a representative of the UN, that’s ultimately our best hope: that through the work of these multilateral bodies, we can actually impact and drive change at the national and regional level. So thank you for sharing your remarks. And last, but of course not least, I would like to give the floor now to Liisa-Ly Pakosta, the Minister of Justice and Digital Affairs of Estonia. Please.


Liisa-Ly Pakosta: Thank you so much, and thank you for having me here. It’s a great honor for Estonia to participate here. So I have, in a way, a possibility to summarize why we are talking about this topic now. It is the situation that has changed, at least around this region where we physically are now. Let us remember that it was 1884 when the Paris Convention on undersea telegraph cables was agreed. And this was already needed then, because when the good countries established the undersea connections, there were, at the next moment, bad guys who wanted to cut them down. So this is actually the situation where we are just now as well, within the broader geopolitical situation. What we see around this area where we are physically now is that the Russian shadow fleet is cutting down our connections. And it has been underlined several times already here how important these connections are for our people, for our security, for our economies, for our hospitals, for our transport, to name only a few. Estonia is a fully digital state, so all our government services are digital. An attack that cuts down the undersea communication cables is therefore not only a hybrid threat to our country; it is a very real threat to our country’s ability to keep its services there for our citizens. So we have seen a dramatic rise of accidents, so-called accidents, during the full-scale war in Ukraine. And I fully agree with my colleague, some of the incidents beforehand have been unintentional. But what we see now is definitely the intentional cutting down of the undersea cables. And the only way to handle this is, I will put it very shortly, that the good guys from like-minded countries and like-minded organizations work together to stop the bad guys who want to take down the security of our people, who want to take down our hospital services, economy, transport, heating systems, to name only a few. So this is the actual question we are discussing now: what we can do together in order to beat the bad guys who want to harm us.


Giacomo Persi Paoli: Thank you, Minister Pakosta, for sharing your perspective. And Estonia, in fact, has been a champion of driving digital transformation for many years. So thank you so much for sharing your perspective. We still have a couple of minutes before we wrap up this first part of the panel, so I want to give all of you the opportunity, if you want to add anything to your remarks or to react to anything you have heard from your colleagues: this would be a good moment. Please.


Karianne Tung: Thank you, moderator. I think the panel has shown that we are completely dependent on the submarine cables: our society, our digital society, for health care services, education, the transport system and so forth. So we need to be able to work together, both multilaterally but also in a multistakeholder way, since many of these cables are also non-governmental and so forth. It’s important to bring the different actors together and to discuss how we can make the cables more resilient, so that we keep connected both within society and internationally.


Giacomo Persi Paoli: Thank you. Please.


Bosun Tijani: I think the point I would love to add is that building resiliency into subsea cables shouldn’t be an afterthought. I think for a long time these are cables that we’ve put down there and we’ve concluded that the risk to them is not severe. And as we’re seeing, risks to them, both intentional and unintentional, have become severe. And because of their critical nature, I think resilience should be intentional. It shouldn’t be something that is an afterthought. And what got me extremely passionate about this as a minister was when the cable cuts in the West African region happened last year, in March, and I saw firsthand the impact on society. Because we’re all working daily to move literally everything online. And if we’re moving everything online and the backbone to this is at risk, it is a big challenge. And of course we were fortunate that the private sector, because a lot of these cables are owned by private companies, came together. But as a minister, I didn’t have any answer to give to people. And people don’t often complain about companies when you have natural disasters; it’s the government that they look to for answers. And that’s why I think the work of the advisory body, the fact that the ITU is prioritizing this, is extremely important. I don’t think it’s something we should push away. Some countries, some regions have the expertise, the framework, the know-how to be able to address this. You’d be surprised at how many countries and regions in the world have no clue where to start. So even having things like regional redundancy and protocols is something I think we should mainstream more. We heard the minister talk about the one in the Baltic region, but there are so many other parts of the world without this understanding. So we should collaborate more, share more, and ensure that collectively we can actually protect this critical infrastructure. Thank you.


Giacomo Persi Paoli: Please.


Liisa-Ly Pakosta: Thank you. Thank you very much for underlining this, because this is absolutely essential. We have known the sea for ages as connecting the whole world, not just regions. That is the fantastic part of the sea. And not only the ferries, but also the undersea cables that we now know as a technological possibility. And protecting them is nothing we can do alone. So I think Norway has put this topic on the agenda very well and very timely, because it really is a global issue. Although we have some local issues, in general what we need is a very clear universal set of rules to protect our citizens on all continents. That is absolutely what we need.


Giacomo Persi Paoli: Thank you. Please.


Jarno Syrjala: Yeah, I think it’s very easy to echo what has been said about international cooperation and its meaning, because we are definitely in different kinds of situations and there are different lessons to be shared. And of course, when we talk about telecommunications or communications cables in general, that’s only part of what is lying beneath the waves. In Finland, for a long time, decades already, we have applied this model of comprehensive security. These are things that you have to connect to the other areas: how to keep society functioning in a time of peace or in a time of crisis. You have to have a holistic understanding of what it is all about.


Giacomo Persi Paoli: Thank you very much. As the screens in front of us are suggesting, we have come to the end of the time for this first part of the panel. I would like to sincerely thank you for taking the time to share your experience and expertise with us, with the audience here in the room and online. And I do invite our audience to join me in a round of applause for Minister Tung, Under-Secretary Syrjala, Minister Tijani and Minister Pakosta. Thank you very much. Thank you. Thank you so much. As we reconfigure the stage for the next part of the panel, I do invite you to watch a very interesting video on a specific application of subsea cable technology, distributed acoustic sensing, and in the meantime we’ll prepare for the continuation of the panel. So over to the screen.


Session video: We can turn the ocean transparent and monitor whales by using distributed acoustic sensing. We have a network of fiber optic cables covering the world. Distributed acoustic sensing works by turning these cables into very long lines of virtual hydrophones. To record this data, one injects light pulses into a fiber cable. Some light is scattered back from impurities in the fiber and can be received by a DAS interrogator. Acoustic sources, such as whales, radiate oscillating pressure fields that stretch and compress the fiber. Variations in backscattered light tell us about the acoustic pressure fields at different points along the cable. This means that we can listen to the ocean at many separate points, creating tens of thousands of virtual hydrophones. This data is available immediately ashore at the end of the cable. Distributed acoustic sensing can revolutionize the way we understand and listen to the ocean, but we can go further than that. We can understand mechanisms of earthquakes, risks of landslides, avalanches, floods. For the ocean, we already have more than 1 million kilometers of fiber optic cables. What if we can use this as a global monitoring system?
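To make the mechanism the video describes concrete, here is a minimal sketch of the core DAS idea: the round-trip time of the backscattered light maps each sample to a position along the fibre, turning one cable into thousands of virtual hydrophones. The fibre speed, gauge length, and cable length below are illustrative assumptions, not parameters of any deployed interrogator.

```python
# A minimal sketch of distributed acoustic sensing (DAS): round-trip
# light travel time maps each backscatter sample to a position along
# the fibre, so one cable becomes many "virtual hydrophones".
# All values are illustrative assumptions.
import numpy as np

C_FIBER = 2.0e8          # assumed speed of light in glass, m/s (~c / 1.5)
GAUGE_LENGTH = 10.0      # assumed spatial resolution per channel, metres
CABLE_LENGTH = 50_000.0  # 50 km of monitored fibre

def channel_positions(cable_length=CABLE_LENGTH, gauge=GAUGE_LENGTH):
    """Positions of the virtual hydrophones along the cable."""
    return np.arange(0.0, cable_length, gauge)

def time_of_flight(position_m):
    """Round-trip delay between pulse launch and backscatter arrival.

    The interrogator uses this delay to attribute each sample of
    backscattered light to a point along the fibre.
    """
    return 2.0 * position_m / C_FIBER

positions = channel_positions()
print(f"{len(positions)} virtual hydrophones over {CABLE_LENGTH/1000:.0f} km")
print(f"backscatter from the far end arrives after "
      f"{time_of_flight(CABLE_LENGTH)*1e3:.2f} ms")
```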


Giacomo Persi Paoli: I’m now very happy to introduce the next set of speakers on stage. This distinguished panel of experts, comprising representatives from government, industry, academia, and civil society, will really help us unpack different perspectives on the evolving threat landscape as well as on actionable measures to protect subsea cable infrastructure, and we just heard through the remarks of all of the four ministers that preceded this panel how multi-stakeholder cooperation is indeed a key component of building resilience. Over the next hour or so, the panel discussion will focus on four key components. First, we will look at the current and emerging threat landscape. We will also try to unpack some of the vulnerabilities in the digital systems that monitor, manage, and secure subsea cable networks. We will be diving deeper into applicable international law, voluntary norms, and emerging best practices relevant to subsea cable protection. And last but not least, we will try our best to come up with some recommendations for strengthening subsea cable security through technical, policy, and legal mechanisms, including the role of public-private partnership. And now, without further ado, I have the pleasure of inviting to join me here on stage and online Camino Kavanagh, expert and research fellow from UNIDIR, Steinar Bjornstad, strategic competence and research manager at TAMPNET, Evijs Taube, member of the management board of the Latvia State Radio and Television Center, Sandra Maximiano, chair of the board of directors of ANACOM, and Kent Bressie, who is joining us online, legal advisor for the International Cable Protection Committee. Please join me in a round of applause to welcome our speakers on stage. So, we have structured this as a conversation. We have some questions that we have prepared for our experts. If there is time towards the end of the session and you would like to intervene, please do let me know. But I can’t make many promises because we have to finish at 1 p.m. sharp. So Steinar, I’d like to start with you. From an operator’s perspective, how do you integrate resilience into the design and management of subsea cable infrastructure, both technically and strategically, in high-risk regions like the North Sea?


Steinar Bjornstad: Very good question. So at Tampnet, we are an offshore telecom service provider. These types of services are really important these days, also because they matter for oil and gas. So we have a very critical infrastructure. It’s mobile, it’s satellite, and it’s fixed links. And the thing is, it all depends on the optical subsea fiber cables. So they are really, really important for the services. And the capacity of these cables is enormous; they carry a lot of traffic, also for data centers out of Norway. So how to protect these cables, how to enable resilience? The thing is that you need to do this already in the planning. So we have multiple cables, that’s the first thing. And if something goes wrong, we need to repair it quite fast. So we are a member of an alliance, ensuring that we can have a repair within a couple of weeks if something goes wrong. Also, because it’s very high capacity, it’s not that easy to switch this traffic electronically from one cable to another if something goes wrong. But we use optical switching, and even offshore we have optical switching. So we actually switch the light in the optical fiber cables. And by doing this we are able to protect very quickly, putting the traffic over to another cable if one cable fails for some reason. So I think those are maybe the key things that we are doing for protecting the traffic.
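The protection-switching logic described here can be illustrated with a minimal sketch: monitor received optical power on each cable and move traffic to a standby path as soon as the active one goes dark. The threshold, path names, and power readings are illustrative assumptions, not Tampnet’s actual system.

```python
# A minimal sketch of optical protection switching: watch receive power
# per path and fail over when the active path loses light.
# Threshold and values are illustrative assumptions.
from dataclasses import dataclass

LOSS_OF_LIGHT_DBM = -30.0  # assumed threshold for declaring a path dead

@dataclass
class Path:
    name: str
    rx_power_dbm: float  # latest measured receive power

class ProtectionSwitch:
    def __init__(self, paths):
        self.paths = paths
        self.active = paths[0]

    def on_power_reading(self, path_name, rx_power_dbm):
        """Update a path's power; fail over if the active path lost light."""
        for p in self.paths:
            if p.name == path_name:
                p.rx_power_dbm = rx_power_dbm
        if self.active.rx_power_dbm < LOSS_OF_LIGHT_DBM:
            healthy = [p for p in self.paths
                       if p.rx_power_dbm >= LOSS_OF_LIGHT_DBM]
            if healthy:
                self.active = healthy[0]
                print(f"failover: traffic now on {self.active.name}")

switch = ProtectionSwitch([Path("cable-A", -8.0), Path("cable-B", -9.5)])
switch.on_power_reading("cable-A", -45.0)  # cable-A cut: light is gone
```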


Giacomo Persi Paoli: Thank you. Camino, I’d like to come to you now. We’ve heard, now and in the previous part of the panel, how different regions are experiencing slightly different considerations when it comes to the threat landscape. But building on the work that you’ve done, I’d ask you to zoom out and consider the broader global picture. How is the threat landscape for subsea cable infrastructure evolving across different regions? And what are some of the key challenges that you have identified?


Camino Kavanagh: Thanks Giacomo, and thank you for the invitation to speak here. It’s a real honor to be on this panel with the other panelists. So I think I’ll zoom back in and then zoom out again. I found it interesting that the minister from Estonia mentioned the 1884 Convention. What’s very interesting in some of the research I’ve been doing on damage to subsea cables, if we’re just looking at the submersed element of submarine cables, is that back in 1881 a group of countries concerned about damage to cables in the North Sea had already raised the issue in the pre-negotiations to the 1884 Convention. In 1882, a specific government brought statistics to the negotiations of the Paris conference and highlighted that 60% of damage to cables was caused by natural events, 35% by unintentional acts due to accidents at sea or force majeure, and 5% by gross negligence and some malign activities, and I think within that, malign activities would have been very minimal. But that was in the build-up to World War I, and as we know, during that period state-backed damage, sabotage, espionage, et cetera, was increasing and being introduced into battle planning. Let’s move forward a century plus, I think 143 years later, and those statistics wouldn’t have changed very much, although maybe the split between natural causes and unintentional damage caused by accidents and so forth would slightly change. As for the number of incidents caused by intentional damage, the stats are very hard to know, because as we know it’s very hard to ascertain responsibility for some of the incidents. But we do know that it’s a great concern for states, particularly in the European context after Nord Stream, and not just for states in this region but also in the Baltic Sea, the Irish Sea, the North Atlantic, and so forth. But that differs significantly across regions, as was mentioned by Minister Tijani as well. Different regions are experiencing very, very different problems, and so the call for coordination, especially from a regulatory perspective and from an operational perspective, is very difficult to answer when your problem sets are so different, so reaching some kind of alignment there is absolutely key, and absolutely key is also engagement with industry. But I’m not going to go too far into that, because I understand there are others who will talk about that.


Giacomo Persi Paoli: Thank you Camino for this first stab at the problem. I think it’s interesting because statistics are normally built on the data we have, right? And it is hard in the cyber domain, and probably just as hard in the specific context of subsea cables, to really have strong data on malicious or malign activities, because we only hear of the successful attempts. What we don’t know is how much of the malicious activity that threatens or targets subsea cable infrastructure isn’t successful; if we had more visibility into that, we could probably do even more. But I’d like now to give the floor to you, Evijs. Speaking of incidents, your organization has recently experienced the direct impact of a subsea cable incident. Could you walk us through what happened and the immediate actions that were taken to respond to the disruption of service, please?


Evijs Taube: Some say that a cut cable or an unplugged cable is the best way to protect against cyber threats; the unfortunate side effect is just the loss of communication. But jokes aside, it’s my big pleasure to be here and to tell the story about the incidents on subsea cables. Over the last few years, especially the last couple of years, the incidents in this area, the Baltic Sea and the North Sea, have dramatically increased. In normal life, before the geopolitics changed, incidents happened from time to time, that’s not any news: fishing nets, et cetera. But the number of big incidents has significantly increased. Our company had been prepared: we had the plans, the drills, procedures, table exercises, algorithms, et cetera, spare parts. But when the incident happens, basically everything becomes crystal clear. With the loss of communication, our company’s customers and users immediately feel it. Talking about the impact of the incident, let’s simplify the networking: basically there are two parts. One part is the public internet, and thanks to the great design of the internet, the internet, we can say, heals itself: it rebalances, and the normal public users don’t feel it. And there is the other part, let’s call it enterprise or data-center-to-data-center, connecting A to B. Those guys usually should have a second, third or fourth route; if one breaks, everything switches over, like my colleague explained. About the first part, as far as we know, nobody felt the impact, because the capacity of the connections is very large, and with the loss of just one cable connection the normal public doesn’t feel it. Of course, we were speculating that there might be some minor examples: for example, somebody doing stock exchange trading, where latency is critical, maybe lost something, we don’t know; maybe somebody lost a game in Counter-Strike or something like that, where latency is important. Talking about latency: our cable connects Latvia and Sweden, so it is very important in terms of latency. When we lost the connection, the latency increased five to ten times, because the speed of light is constant and we cannot fight physics in that sense. But otherwise everything continued to work. How did we fix the thing? We said we need to fix it as soon as possible, but it really depends on preparation. Do we have the right spares, the right spare cable, the right joints? Do we have a vessel standby agreement? Do we have the right weather? The waves shouldn’t be higher than two meters, for example, for special vessels. So in our incident we tried three times to recover; the third time was successful, and despite February being a short month, we managed to fix it within a month, in 28 days, which is a good result in a winter storm. So all in all, we really had a good lesson. You cannot compare that practical lesson to a table exercise, and we learned a lot from it.
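The “five to ten times” latency figure follows directly from the physics mentioned here: light in fibre travels at roughly 200,000 km/s, so latency scales with path length, and a long detour multiplies it. A back-of-the-envelope sketch, with route lengths as illustrative assumptions rather than the actual cable distances:

```python
# Latency scales with route length because the speed of light in fibre
# is fixed (~c / 1.5). Route lengths below are illustrative assumptions.
V_FIBER_KM_S = 200_000.0  # approximate speed of light in glass, km/s

def one_way_latency_ms(route_km):
    return route_km / V_FIBER_KM_S * 1000.0

direct_subsea_km = 450   # assumed direct Latvia-Sweden route
rerouted_km = 3_200      # assumed detour over land and other cables

for name, km in [("direct subsea", direct_subsea_km),
                 ("rerouted", rerouted_km)]:
    print(f"{name}: {km} km -> {one_way_latency_ms(km):.2f} ms one way")

# The detour is ~7x longer, so latency rises by the same factor,
# consistent with the five-to-ten-times increase described above.
print(f"increase: ~{rerouted_km / direct_subsea_km:.1f}x")
```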


Giacomo Persi Paoli: Thank you, also for bringing to light some very concrete examples of the sort of incidents that can occur. The 28 days is a remarkable result, but it shows that this is not something that can be fixed overnight, so it does really require adequate planning and adequate resourcing; otherwise, 28 days can be extended even further. I’d like now to come to you, Sandra. Given its strategic location and tradition, Portugal is a very relevant player in the submarine cable industry, and through your work with ANACOM, the National Regulatory Authority for Communications in Portugal, and more recently with the ITU Advisory Body for Submarine Cable Resilience, what practices do you recognize as being most effective for strengthening subsea cable protection at the national level, as well as across jurisdictions?


Sandra Maximiano: First of all, thanks a lot for the invitation. It’s really a great honor to be here talking about this very important topic. First, I would also like to tell you that Portugal’s geography places us at the crossroads of global connectivity. We have one of the largest exclusive economic zones in the world and a long tradition of cable landings, so Portugal is uniquely positioned to strengthen its role in this field. In fact, submarine cables already link us directly to multiple continents. We also have two autonomous regions, Madeira and the Azores, which are composed of islands and depend almost entirely on submarine cables for communication. So that gives us a privileged position, but it comes with special responsibilities for ensuring the resilience of submarine cable systems. We truly believe in four, I would say, key aspects. First, building redundancy and route diversity. Second, strategic preparation and predictive maintenance. Third, protection zones. And fourth, promoting rapid repair capacity. After all, we cannot prevent every incident; submarine cable faults are inevitable, as they always have been. The question is how we prepare for them, creating resilience and ensuring continuity of service upon disruption. Starting with redundancy, this will involve several important points: establishing multiple geographically diverse cable routes and alternative routes, including satellite backups and terrestrial connections; avoiding strategic choke points, to minimize congestion and high-risk areas which are more susceptible to sabotage or accidents; and deploying armored cables and burying cables deeper in higher-risk areas. These are the important points for planning and building redundancy. Second, I’ll mention strategic preparation, which includes building intelligence into our networks so they can adapt in real time. Technologies like software-defined networking and AI analytics allow dynamic rerouting and predictive detection. This agility reduces downtime and boosts resilience. Third, we need collective mechanisms to support repair capacity, especially for regions and countries that lack the resources to respond on their own. This is particularly important for island states and remote regions. And last, we need to promote rapid repair, which is really important. For that, licensing and permitting procedures should be simplified and made more flexible, and of course we should promote investment in repair vessels and joint capacity. These priorities cannot be postponed. Equally important is having clear plans for incident response, setting strict deadlines for repairs, and establishing priority levels so that, in case of multiple simultaneous failures, the most critical links, those essential for national security and public welfare, are restored first. So it’s very important to know which infrastructures are critical in every country, so we can establish these priorities. We’re also at a time when technological innovation, particularly artificial intelligence, is reshaping the landscape.
The training and deployment of large AI models demand massive computational capacity, as we know, and energy-intensive data centers, which, in turn, depend on robust, high-capacity connectivity, including submarine cables. This is not just about speed, but about enabling an entirely new digital paradigm. Anacom is actively monitoring these trends to ensure that our regulatory framework anticipates infrastructure bottlenecks and ensures sustainable high-capacity connectivity. So, in an environment of very fast technological change, we need regulation to keep the same pace, and that’s what Anacom is investing in nowadays: keeping pace with innovation, with a proper regulatory framework that can enable the redundancy and resilience of submarine cables.
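The priority-levels idea raised a moment ago, restoring the most critical links first when several cables fail at once, can be sketched as a simple ordering rule. The tiers, link names, and tie-breaking rule below are hypothetical illustrations, not any country’s actual classification:

```python
# A minimal sketch of priority-based restoration: when several cables
# fail at once, order repairs by a pre-agreed criticality tier.
# Tiers and link names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class FailedLink:
    name: str
    criticality: int        # 1 = national security / public welfare, 3 = lowest
    reported_hours_ago: float

def repair_order(failures):
    """Most critical first; ties broken by how long the fault has lasted."""
    return sorted(failures,
                  key=lambda f: (f.criticality, -f.reported_hours_ago))

incidents = [
    FailedLink("island-spur", criticality=1, reported_hours_ago=2.0),
    FailedLink("transit-east", criticality=2, reported_hours_ago=30.0),
    FailedLink("backup-ring", criticality=3, reported_hours_ago=5.0),
]
for link in repair_order(incidents):
    print(f"dispatch repair: {link.name} (tier {link.criticality})")
```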


Giacomo Persi Paoli: Thank you. Thank you very much for your initial overview of some key topics. Again, building on the idea that resilience is intentional: it’s not something that can simply be improvised in response to need. The issues of preparedness and of regulation are very important. Kent, thank you so much for patiently waiting online. It is a pleasure to see you on the screen. We’ve heard, now and before, the importance of public-private partnerships when it comes to the protection of subsea cables. And through your work at the ICPC, you engage substantially with both governments and industry. So from your perspective, what are the most effective ways to strengthen public-private partnership in responding to cable-related threats?


Kent Bressie: Thank you, Giacomo, for allowing me to participate remotely. I am actually currently on holiday in Greece before I teach oceans law as part of the Rhodes Academy of Oceans Law and Policy, which runs each year here in Greece. It’s also nice to see my fellow panelists. To your question: more than anything else, we need more and better awareness and communication between and among submarine cable operators, other marine industries, and governments at the national, regional, and multilateral levels. These are never-ending tasks. They are not one-time things; we really see a need for ongoing dialogue among these stakeholders at all levels. In particular, governments need to understand what industry already does to promote cable protection and resilience in the design and operation of systems, and also to recognize those actions that governments are uniquely positioned to take, particularly political and military responses to intentional damage. In some cases, industry and governments have shared tasks. We also need better understanding of risks and threats to cables. The ICPC, in particular, has a lot of very good data on that, but I’m not sure that it’s always recognized or understood. We also need to understand the interrelationships between unintentional and intentional sources of damage, and the fact that the cause of damage is not always immediately known. In the design and development phase, submarine cable operators embed cable protection in system design by selecting routes and landings that balance connectivity needs with risk mitigation and geographic diversity, which Sandra and others today have taken note of, all of which strengthens resilience. In the operating phase, operators coordinate with other marine stakeholders and publicize the locations of cables, which I’ll return to in a second, as that’s become increasingly fraught. Governments don’t need to dictate or duplicate those particular actions, but again, there are some actions that only governments can take. The ICPC launched in 2021 its best practices for governments for cable protection and resilience to highlight the ICPC’s own thinking about this. The best practices are not a lengthy document; they were meant to be very user-friendly, about 12 pages long, with some very specific recommendations. These include the use of default separation distances between cables and other marine activities, as uncoordinated marine industries, whether wind farms, oil and gas development, seabed mining, vessel anchorages, or fishing, can damage cables. Having a single point of contact within national governments. Adoption of cable protection laws and measures, and implementation of them. Minimization of cabotage and crewing restrictions, customs duties, taxes, and fees, as this is very much a global maritime industry. Classification of submarine cables as critical infrastructure in order to secure government resources for protection, which is the subject of a forthcoming study that Camino is publishing. And the sharing of risk and threat information between governments and industry, as we have seen with recent cable damage incidents in the Baltic and the Arctic. There’s still a lot of work to be done there.
But so far, I think a lot of government engagement with the industry in response has been very productive. The use of technology such as fiber sensing is also very promising. And then ratification and implementation of the Law of the Sea Convention and the 1884 Convention, both of which established rights and responsibilities for states, including some key tools that relate to cable security. A lot of these best practices were later incorporated into the New York Statement, and many of them were echoed by Sandra, the previous panelist. So I’ll just finally note that it’s important for governments to understand how their own policies and regulations can potentially undermine cable protection and resilience, because we’re very concerned about this as an industry, and our best practices also address this. In some jurisdictions, we see national security-oriented regulation creating massive delays for installation and repair permits, and this ultimately undermines the development of additional and diverse systems that promote resilience and allow recovery of damaged systems. We see that regulations oftentimes encourage clustering of cables and landings in narrow corridors, to get cables out of the way of offshore energy development or fishing, and that can magnify the risk that a single event will damage multiple cables and disrupt connectivity. And finally, we see a renewed push by some governments to remove cables from nautical charts. This is woefully misguided. Given that approximately 70 percent of cable damage each year is caused by fishing and anchors, removing cables from nautical charts would significantly increase those risks and make it impossible for cable owners to pursue damages claims, because no one would know where the cables are. So ultimately, I don’t think that we need elaborate new regulatory constructs to encourage engagement between governments and industry, but we do need to leverage existing agreements, data, and tools to promote cable resilience, protection, and security, and to inform laws, policies, and coordinating mechanisms.


Giacomo Persi Paoli: Thank you, Kent, for sharing your first remarks with us. Having served 15 years in the Navy, the thought of having subsea infrastructure that is not on nautical charts is terrifying in more than one way. But thank you also for bringing to light the best practices that the ICPC has developed, which I do invite everyone who is interested to consult, and for opening by stressing the need for continuous dialogue among different stakeholder groups. I think this is key; this is what we’re trying to do, but of course in an hour and a half we can only scratch the surface. There is much more that needs to be done on a continuous basis, even for the simple reason that people change and rotate, particularly in government: you may have a very fruitful dialogue today, and in six months your counterparts have rotated to other posts and other jobs, so it is important that there is a mechanism to really make sure this dialogue is continuous. I’d like to go back to our panel for a second round of questions, coming back to you, Evijs, about the incident that you described earlier. Following the cable disruption, how did your organisation coordinate with national authorities, international partners and any other stakeholder that was relevant, and what are some of the key lessons that emerged about effective collaboration and communication?


Evijs Taube: I would point to the different communication channels. First of all, they’re very important, and not only for subsea cable incidents; for any big incident, what we call crisis management within a company, the core crisis management team should be very precise, should be trained, and shouldn’t be bigger than needed. Everybody has to know exactly what to do. And that’s also a question of training, as I mentioned, the table exercises. Second is communication with partners, and here we are touching different stakeholders. It includes authorities; it includes international partners, because any subsea cable, in most cases, is connecting countries, right? You’re connecting one country to another, in our case, Latvia and Sweden. You should know your counterparts. In our case, we have an established communication line, and I’m not talking about cable communication but about a human communication line connecting the NOCs 24/7. So within a minute, or seconds, you always know whom to talk to. That also should be documented, and in the best case trained, practiced, et cetera. And then the third part is public communication. Public communication is also very important, and you shouldn’t be silent for days after the incident. In our case, we had prepared what we call routine press releases, daily. You shouldn’t disclose too much information, because in many cases it’s sensitive information, right, what happened in which place. But you shouldn’t be totally silent either. So you really need to feel the balance: what to disclose and what not to disclose. That’s also very critical. The communication with the different parties involved, in subsea cable cases, includes, of course, the Navy, the military side. Those algorithms and procedures also have to be established before any incident, and they also require training and preparation; the algorithms should be written on paper, trained, drilled, et cetera. So it’s all about preparation. And then in practice, in the worst case, when an incident happens, you can try it in real life, and there are always lessons. As with any preparation, any plans, who was it, Truman, who said that any plans go to waste, but the most important thing is to make the plans.


Giacomo Persi Paoli: Thank you. And we’ll come back to this issue of planning and preparing later, but I now would like to come back to you, Steinar. We’ve heard already how important it is to be able to respond as quickly as possible to minimize disruption. So my question to you is: from a technological standpoint, what are some of the emerging tools or innovations that you see as most promising for detecting, mitigating, or responding to threats to subsea cables?


Steinar Bjornstad: Yeah. The answer, I think, is actually to combine a lot of tools: monitoring by combining several tools and getting a lot of data. And I would like to tell a story about how this started. It started with trawlers, probably. We didn’t actually know it was cable cuts; we just noticed the light went off, and wondered what had happened. And then the first tool that came to hand was AIS information. That is GPS information sent from vessels, also containing some information about their activity, like, for example, trawling. And by actively monitoring the activity around our cables, we now have knowledge of what is going on, which vessels are moving around these cables. But that is on the surface, not underwater, not subsea. And some of these vessels just turn off their AIS information, so it’s still challenging. And then we started exploring fiber sensing. Fiber sensing actually means that the fiber works as a microphone, or an array of microphones, so we can listen underwater to what is going on. By doing this, we are able to see and listen if a trawler is approaching, and we can see it like two kilometers away. So we actually see the subsea activity, and that really enables us to take action very early. But the thing is that trawlers are crossing the cable like ten times every day; it can be as much as that. So what we actually need to know is also if the cable is hit. Normally it is okay to pass over the cable, because it’s buried, and it should be well buried and protected. But we can’t control the environment, and there are water currents that may leave the cable exposed. So we use a different fiber sensing technology that now also gives us statistics on small hits. And by doing this, we can also see where the cable may be vulnerable. So, combining all these technologies, we know where the cable may be vulnerable, whether there is something approaching these vulnerable points, and also statistics on what is going on over the cable, what types of vessels are crossing. So today, most of our network is actually covered with fiber sensing techniques, and we have quite a good overview of what is going on.
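The “statistics on small hits” approach described here can be sketched as a simple counting exercise: bucket strike events by cable segment, so that segments hit unusually often stand out as possibly unburied or exposed. The segment size and threshold below are illustrative assumptions:

```python
# A minimal sketch of small-hit statistics: count fibre-sensing strike
# events per cable segment so frequently hit segments stand out as
# possibly exposed. Segment size and threshold are assumptions.
from collections import Counter

SEGMENT_M = 500          # bucket the cable into 500 m segments
EXPOSURE_THRESHOLD = 5   # assumed: this many hits flags a segment

def flag_vulnerable_segments(hit_positions_m):
    """hit_positions_m: metres along the cable for each detected strike."""
    counts = Counter(int(p // SEGMENT_M) for p in hit_positions_m)
    return sorted(seg * SEGMENT_M for seg, n in counts.items()
                  if n >= EXPOSURE_THRESHOLD)

# e.g. trawl gear striking repeatedly around km 12.3
hits = [12_310, 12_340, 12_280, 12_305, 12_330, 48_900]
for start in flag_vulnerable_segments(hits):
    print(f"segment {start/1000:.1f}-{(start+SEGMENT_M)/1000:.1f} km "
          f"may be exposed; schedule survey")
```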


Giacomo Persi Paoli: So, would you say that situational awareness has improved significantly in recent years, thanks to technological innovations? Yeah? Okay, good. Thank you for that. And Camino, I’d like to come back to you. Building on the insights shared by the panelists, as well as drawing from your own research with UNIDIR, what are some additional policy or technical approaches that you’ve seen states and other stakeholders exploring to strengthen the protection and resilience of subsea cables?


Camino Kavanagh: Okay, so basically what we did in the research was break government approaches down into three different areas: how government action, whether policy, regulatory, or operational activity, contributes to the actual resilience capacities of the systems themselves. So we looked at the absorptive capacities, which would be the kind of preparedness: ensuring all the procedures and protocols, the regulation and so forth, including many of the best practices that Kent mentioned, are in place in the event that something happens, and that the systems are actually built bearing in mind that something will eventually happen, regardless of the cause. The second area that we looked at was the responsive capacities of the system: what are governments doing to actually prepare and support some of the responses required? I think Sandra mentioned issues related to repair capabilities and capacities, and the previous panel discussed workforce challenges and so forth. Industry has a significant focus on workforce at the moment, trying to attract young talent into the industry, but it’s difficult. Governments need to do the same, and we also have to bear in mind that there are some very small governments, small countries, that have limited capacities and limited resources, for which being able to invest in DAS and so forth is a luxury; one also has to bear in mind that it has jurisdictional complications. So there are lots of challenges on that front. Also within the restorative capacities, we are seeing that a number of states are actually investing in market analysis and so forth to see where the better use of their resources would lie. There is a range of other issues in that rubric that we touched upon, which I can come back to. The final area that we looked at was adaptive capacities, and we often forget about those, but colleagues here have also talked about learning from incidents, regardless of the cause: what can we learn from them, and how do we adapt? How do we adapt our national structures, procedures, regulations, and so forth, to be able to prepare and respond to incidents should they happen? And a final thing that cuts across these different areas: we’ve been talking mainly about the submersed part of the subsea cable system. We’ve touched upon it slightly, but there’s also the network layer, the supply chain issues, a range of other issues. There’s the repair fleet, the stores, the supplies, and so forth. So there is a range of different areas, and connecting these through government action in crisis management, emergency planning and so forth is absolutely critical, and I don’t think any government is there yet. So how you bring together all of those elements, working in conjunction with industry as well as academia, is absolutely fundamental.


Giacomo Persi Paoli: Thank you, Camino. Sandra, I’d like to come back to you with a follow-up question. Given also your role within the advisory body on submarine cable resilience, how do you see international initiatives such as this one fostering better cooperation and strengthening the resilience of subsea cable infrastructure?


Sandra Maximiano: So that’s one of the main purposes of the advisory body for submarine cable resilience, and a main purpose because collaboration is definitely vital in this case. Enhancing connectivity, stimulating innovation and promoting the resilience of submarine cables are multifaceted tasks, and they require collaboration across different organizations. We need governments, industry, academia and international organizations to work together. Over the past year, ANACOM has deepened its partnerships, recognizing that each player has a unique role. For instance, governments and regulators create enabling frameworks, as I also mentioned before; academia advances research and innovation, which is extremely important; industry builds and operates vital critical infrastructure. So we all need to work together. Having said this, the challenge is in ensuring that these diverse players speak the same language. And sometimes it’s very difficult because, of course, all of these players have their own interests; they maximize, now speaking like an economist, their own utility and interests, but we need to align their efforts toward a common goal. And the advisory body is an example: it’s a multi-stakeholder forum where we try to do that. We try to bring all these organizations together, working together and trying to align our language for this common goal. At ANACOM we see this first-hand, and we are very actively fostering an ecosystem that encourages investment in submarine cables and associated infrastructure. We remain committed to leading this agenda at both the European and international levels. At the European level we do it mainly through BEREC, the Body of European Regulators for Electronic Communications, and in collaboration with the European Commission. In BEREC, for example, ANACOM is a co-leader of the BEREC report on domestic submarine cables in different member states, together with the national regulatory authority from France, ARCEP. We are also very active at the European Union Agency for Cybersecurity, where this is an important topic as well. And finally, as mentioned, I’m very proud to co-chair the International Advisory Body for Submarine Cable Resilience, which provides a unique global platform for collaboration. The International Advisory Body, for those who are still not familiar with it, was launched by the International Telecommunication Union in partnership with the International Cable Protection Committee, ICPC. This partnership is a significant development, combining the ITU’s capacity to promote worldwide dialogue on digital matters with the ICPC’s expertise in submarine cable resilience, which I believe to be a very fortunate and needed collaboration. In addition, 40 outstanding personalities from both the public and private sectors across the world are part of the advisory body. This ensures diverse knowledge and experience, including contributions from countries ranging from large economies to small island states. This diversity is extremely important. The role of the advisory body is to promote these open conversations and build trust for the benefit of the global community, and we aim at ensuring that discussions are based on technical merit and best practice.
In my personal view as well, I think we should give special attention to regions, countries and remote islands where the economic incentives for prompt response mechanisms are lower. But, of course, their response is important for everyone, so the incentives are there, and we should work in collaboration to increase the response capacity of these small states. The advisory body has made very decisive progress. In particular, I would like to mention the International Submarine Cable Resilience Summit in February in Abuja, Nigeria, where the body approved the Abuja Declaration, marking a key milestone for submarine cable resilience and paving the way for greater international cooperation. Secondly, the body established clear priorities for 2025-26 and decided to form three thematic working groups responsible for delivering concrete outcomes. These groups will address submarine cable resilience from multiple complementary perspectives. One of the working groups will focus on resilience by design, examining the importance of ensuring service continuity through redundant and diverse communication routes. Another working group will focus on the timely deployment and repair of submarine cable systems, exploring how regulatory measures can expedite this process. And the third working group will be dedicated to risk identification, monitoring, and mitigation; within this framework, we’ll assess the application of new technologies and monitoring systems. As I said, the body is composed of experts from different regions and stakeholder groups, and given the progress that we have made so far, I’m totally confident that the advisory body will remain committed to ensuring that submarine cables are safe and resilient. So just to conclude: preparing for the future, especially in this matter, is not a task for one individual body. We must work together, and we share responsibility among regulators, industry, academia, and the international community. We should cooperate openly, pragmatically, and globally in this case.


Giacomo Persi Paoli: Thank you, Sandra. Kent, I’d like to come back to you for a last question. Now with growing international attention to subsea infrastructure protection, how can new initiatives complement and avoid duplicating existing efforts like those led by ICPC and others?


Kent Bressie: Thank you, Giacomo. Well, first, I’d like to start just by noting that I think Sandra laid out very clearly the amazing collaboration that we have between the ITU and the ICPC under her leadership and that of Bosun Tijani, who was on our prior panel, and I think that can be a model for leveraging the industry expertise and experience of the ICPC without duplicating it. But I think it’s also important to understand, Sandra was very generous and helpful in describing the advisory body, but I’m not sure that everyone has a good understanding of what the ICPC does either. So I thought I’d note just briefly that the ICPC was founded in 1958, and it’s the world’s leading organization promoting submarine cable protection and resilience. It’s an NGO that works with its members, governments, international organizations, other marine industries, and the scientific community on a number of key tasks. First, to identify and mitigate risks of natural and human damage to cables. The ICPC has developed the world’s leading databases of cable damage information and repair time frames, which are key inputs for the work of the advisory body. The ICPC has developed recommendations, and the best practices for governments that I mentioned earlier, for the entire cable project lifecycle. The ICPC promotes scientific research regarding cables in the marine environment. This is even more critical in light of the BBNJ agreement under the Law of the Sea Convention. And the ICPC just published, in partnership with the United Nations Environment Programme World Conservation Monitoring Centre, a report titled Submarine Cables and Marine Biodiversity, which we had very much promoted and provided resources for, as a resource for governments as they implement the BBNJ agreement. We also work to promote, and this is particularly my task, the rule of law for the oceans, particularly ratification and implementation of the UN Convention on the Law of the Sea and also, as Camino was mentioning, the 1884 Cable Protection Convention, on which there is renewed focus. The ICPC has more than 240 members from approximately 75 countries. Those are industry representatives, but the ICPC also has about 20 government observers, and we welcome more formal government observers, but also engagement even from those governments who are not observers. So we see, as I noted before, a need for continuing engagement and communication. And this is not just between the ICPC and governments and other marine stakeholders, but also the regional cable protection committees that focus on more localized issues around the world. These aren’t formally subsidiaries of the ICPC, but we coordinate closely with them to avoid duplication of work. There are regional cable protection committees that are very active, such as the European Subsea Cables Association, the North American Submarine Cable Association, recently established committees in Africa, and the Oceania Submarine Cable Association, though we do not have regional cable protection committees in every area. The ICPC and these other organizations are all keen to work with governments on a range of initiatives and don’t view the recommendations and principles they advance in a proprietary fashion. As I noted before, the fact that the New York Statement included recommendations the ICPC had previously articulated was very flattering, but we would like to see greater adoption of them by more and more bodies. As for other recent ICPC engagements, obviously the international advisory body is a key focus for us, where the ICPC serves as
co-executive secretary and we’re grateful for the leadership of our co-chairs who are with us today. We held a critical Law of the Sea workshop for our members and regional academics in Singapore last year with the support of the Australian and Singapore governments looking at key issues with the BDDNJ agreement, regulatory and permitting issues, and security among others. We have worked with the UNODC to develop cable resilience plans in the Indian Ocean region and it’s a really interesting and helpful model I think for a lot of countries in considering how to bring together stakeholders to think more in a more integrated way about connectivity and cable protection. And then I serve on the International Law Association’s Committee on Submarine Cables and Pipelines which is developing guidelines to address prevention, monitoring, and responses under international law to intentional cable damage. As I noted before, we’re unlikely to get a new treaty addressing some of these issues but the ICPC’s view is that countries have existing tools under international law that they can and should use and we certainly point to the government of Finland as having made good use of those tools. So finally there’s a need for better communication and coordination on issues and multilateral processes so these aren’t just sort of looking at best practices initiatives but also looking at collaborating on some critical issues and other fora globally. These include work in the International Seabed Authority to ensure cable protection and resilience in relation to deep sea mining which is increasingly an issue given the push for critical minerals. and development of green technologies, the impact of the new BBNJ agreement on cable routing and permitting, and cable damage by dark fleet ships, which we remain concerned is something that the international community, including the IMO has not yet been able to address effectively. So in general, I think we have the data analysis, recommendations, and potential legal tools that we need,


Karianne Tung: and which shouldn’t necessarily be duplicated, but we need much better and more comprehensive global implementation. Thank you.


Giacomo Persi Paoli: Thank you, Kent. We have just a couple of minutes before we have to wrap up this panel. So before I share with you some concluding remarks from my notes, I just wanted to give the panelists literally 20 to 30 seconds maximum: if there is one key takeaway that you would like the audience to walk away with after this hour and a half of discussing subsea cables, what would that be? And you can only add to whatever the previous person has said. So starting with you, Camino.


Camino Kavanagh: Prepare and exercise your preparedness.


Giacomo Persi Paoli: Thank you.


Steinar Bjornstad: Be prepared, monitor what is going on, situational awareness.


Evijs Taube: I would mention again these DAS systems, which were mentioned earlier. Every existing cable is a big asset, and we can call it a big sensor. If we install an integrated system of such sensors, distributed or centralized, on existing and new cables in a particular area, for example the Baltic Sea, which is a very compact sea, some call it the Baltic Lake, as a test bed, that would give a big benefit, not only in protecting the cables, but in understanding what is going on under the water. There is the shadow fleet, there is the normal fleet, there are other fleets, but nobody knows what is happening. If a shadow-fleet vessel switches off its AIS system, it goes invisible. With such sensors, we can immediately see much more.
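
Taube’s point about vessels switching off AIS lends itself to a simple illustration: flag any vessel whose AIS track goes silent for too long while inside a cable-protection corridor. The Python sketch below is purely illustrative; the AisFix structure, the bounding-box corridor, and the 600-second threshold are hypothetical assumptions, not anything described in the session.

    # Illustrative sketch only: flag AIS "gaps" near a cable corridor.
    # All names, thresholds, and data structures are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AisFix:
        vessel_id: str
        timestamp: float  # seconds since epoch
        lat: float
        lon: float

    def in_corridor(lat: float, lon: float, corridor) -> bool:
        # Crude bounding-box test standing in for real corridor geometry.
        (lat_min, lat_max), (lon_min, lon_max) = corridor
        return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

    def flag_ais_gaps(track, corridor, max_silence_s: float = 600.0):
        # Report the last fix before any silence longer than max_silence_s
        # that occurred while the vessel was inside the corridor.
        alerts = []
        for prev, cur in zip(track, track[1:]):
            gap = cur.timestamp - prev.timestamp
            if gap > max_silence_s and in_corridor(prev.lat, prev.lon, corridor):
                alerts.append(prev)
        return alerts

    # Hypothetical example: a vessel reporting every 10 s, then silent ~30 min.
    baltic_box = ((54.0, 60.0), (10.0, 30.0))
    track = [AisFix("V1", float(t), 56.0, 18.0) for t in range(0, 60, 10)]
    track.append(AisFix("V1", 50.0 + 1800.0, 56.0, 18.1))
    print(flag_ais_gaps(track, baltic_box))  # prints the last fix before the vessel went dark (t=50.0)

A real system would also need to flag vessels that go silent and never reappear, and would fuse AIS gaps with independent sensing such as the DAS systems Taube describes, since a switched-off transponder produces no further fixes at all.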


Giacomo Persi Paoli: Thank you. Sandra?


Sandra Maximiano: And as I said, I think collaboration is the key. This is a very important matter. It’s a multi-stakeholder issue, and we should work together and, as I said, involve everyone: governments, academia, regulators, international organizations. We should increase awareness and, of course, work on best practices. I think booklets of best practices on how to act are very important. Some countries are moving at different speeds, but on this issue we should all try to align the speed at which we move because, as I said, small states and remote islands can’t be left alone; that would have a negative impact on all of us.


Giacomo Persi Paoli: Thank you. Kent, any final thoughts on your side?


Kent Bressie: Yes. Convene and communicate with stakeholders. Negotiations are a really complicated place.


Giacomo Persi Paoli: Thank you. So, very quickly, because time is up, what I took away from this hour and a half is the following. Strengthening the resilience of subsea cables is a team sport, one that requires different players knowing exactly what to do and working together as a team. No individual player can achieve this ambitious goal without working with others. Resilience also has different components. One is protection. Protection is key and necessary, but it’s not sufficient. It has to be matched with adequate planning, because no matter how well we protect our cables, incidents are going to happen, and it’s important to have plans that are well thought out in advance; resilience cannot be improvised. It has to come by design, through careful planning. But while planning is essential, plans are useless unless they are put into practice through concrete measures of preparedness, which include dialogue, discussion, and cooperation between states and between the public and private sectors. And this cooperation needs to be tested through exercises, through crisis-management drills that really bring to the surface all of the mechanisms that need to be improved, because ultimately the last important pillar is response, which needs to be effective and quick in order to minimize disruption and ensure that we work towards resilience, redundancy, and ultimately continuity of service. With that, I know this summary cannot do justice to an hour and a half of discussion, but hopefully it touches on the key points, and we’re out of time. I do invite you to approach our speakers and experts after the session if you would like to ask more questions. All that is left for me to do is to thank you, the audience, for engaging and for being here in person and online; the government of Norway, not only for being amazing hosts, but also for partnering with UNIDIR in organizing this session; and of course, last but not least, our excellent experts and speakers, who took the time to share their knowledge with us. So please join me in a round of applause for our speakers. Thank you.



Giacomo Persi Paoli

Speech speed

141 words per minute

Speech length

2806 words

Speech time

1190 seconds

Subsea cables carry over 99% of global intercontinental data traffic, making them critical digital infrastructure

Explanation

Giacomo Persi Paoli emphasizes that subsea telecommunication cables are a hidden but critical infrastructure that carries the vast majority of global intercontinental data. Despite their criticality, they remain largely out of sight and out of mind, making them vulnerable.


Evidence

Over 99% of global intercontinental data traffic statistic


Major discussion point

Critical Infrastructure Vulnerability and Importance


Topics

Infrastructure | Cybersecurity


Agreed with

– Karianne Tung
– Bosun Tijani
– Liisa-Ly Pakosta

Agreed on

Critical Infrastructure Dependency



Karianne Tung

Speech speed

125 words per minute

Speech length

553 words

Speech time

263 seconds

Digital society is completely dependent on submarine cables for healthcare, education, transport systems

Explanation

Minister Tung argues that modern digital society relies entirely on submarine cables for essential services. The underwater cables form the foundation of global internet infrastructure, enabling communication, sharing, and innovation across all sectors of society.


Evidence

Healthcare services, education, transport system dependencies mentioned


Major discussion point

Critical Infrastructure Vulnerability and Importance


Topics

Infrastructure | Development


Agreed with

– Giacomo Persi Paoli
– Bosun Tijani
– Liisa-Ly Pakosta

Agreed on

Critical Infrastructure Dependency


Recent incidents with damages to subsea cables in Baltic Sea and Red Sea highlight increased vulnerability

Explanation

Minister Tung points to specific recent incidents that demonstrate the growing threats to subsea infrastructure. These incidents have raised awareness about the need to better protect this critical infrastructure as more than 99% of intercontinental data traffic depends on these cables.


Evidence

Nord Stream pipeline damage, subsea cable damage in the Baltic Sea and Red Sea, war in Ukraine


Major discussion point

Evolving Threat Landscape and Security Concerns


Topics

Cybersecurity | Infrastructure


Cross-border cooperation is crucial since submarine cables cross national borders and international waters

Explanation

Minister Tung emphasizes that submarine cable infrastructure often spans across national borders and international waters, making international cooperation essential. No single country can effectively protect these cables alone, requiring coordinated efforts between nations.


Evidence

Submarine cables crossing national borders and international waters


Major discussion point

International Cooperation and Governance


Topics

Infrastructure | Legal and regulatory


Agreed with

– Jarno Syrjala
– Bosun Tijani
– Liisa-Ly Pakosta
– Sandra Maximiano

Agreed on

International Cooperation Necessity


Examples include North Sea cooperation (2024) and Baltic Sea cooperation (2025) between multiple countries

Explanation

Minister Tung provides concrete examples of successful international cooperation initiatives. These regional partnerships demonstrate how countries can work together to establish frameworks for protecting critical subsea infrastructure.


Evidence

North Sea cooperation in 2024 between Belgium, Netherlands, Germany, UK, Denmark, and Norway; Baltic Sea cooperation in May 2025 with Denmark, Estonia, Finland, Germany, Iceland, Latvia, Lithuania, Poland, Sweden, EU, and Norway


Major discussion point

International Cooperation and Governance


Topics

Infrastructure | Legal and regulatory


Close cooperation between private sector and civil/defense authorities maximizes knowledge and strength

Explanation

Minister Tung advocates for establishing close cooperation between different sectors to combine their respective expertise and capabilities. This approach allows for maximizing the collective knowledge and strength of civil, private, and defense sectors in protecting subsea cables.


Evidence

Combining knowledge and strength of civil, private sector, and defense sector


Major discussion point

Public-Private Partnership Requirements


Topics

Infrastructure | Cybersecurity


Agreed with

– Jarno Syrjala
– Kent Bressie

Agreed on

Public-Private Partnership Requirements


Use of innovative technologies for monitoring cables, threat detection, and quick intervention

Explanation

Minister Tung highlights Norway’s efforts to intensify security through technological solutions. These include conducting surveys of subsea cables and using innovative monitoring technologies that enable detection of threats and incidents with quick notification and intervention capabilities.


Evidence

Conducting surveys of subsea cables, innovative monitoring technologies, threat detection, quick notification and intervention


Major discussion point

Technical Solutions and Innovation


Topics

Infrastructure | Cybersecurity



Jarno Syrjala

Speech speed

116 words per minute

Speech length

819 words

Speech time

422 seconds

Geopolitical tensions have changed the security environment with implications for digital infrastructure safety

Explanation

Under-Secretary Syrjala argues that fundamental changes in the security environment have direct implications for the safety and resilience of critical digital infrastructure. Recent incidents in the Baltic Sea demonstrate the clear need to better protect undersea infrastructure in this changed threat landscape.


Evidence

Recent incidents at the Baltic Sea, fundamental change in security environment


Major discussion point

Evolving Threat Landscape and Security Concerns


Topics

Cybersecurity | Infrastructure


Solid public-private partnership is one of the most important aspects of telecommunications resilience

Explanation

Syrjala emphasizes that effective public-private partnerships are crucial for telecommunications resilience, particularly highlighted in the NIS2 directive. Finland has established close cooperation between public authorities and private companies over the years as a key component of their resilience strategy.


Evidence

NIS2 directive emphasis on public-private partnership, Finland’s established cooperation between public authorities and private companies


Major discussion point

Public-Private Partnership Requirements


Topics

Infrastructure | Legal and regulatory


Agreed with

– Karianne Tung
– Kent Bressie

Agreed on

Public-Private Partnership Requirements


International cooperation through NATO, EU, and ITU helps build resilience and response capabilities

Explanation

Syrjala highlights how international organizations and alliances contribute to building resilience and response capabilities for submarine cable protection. These multilateral efforts provide frameworks for cooperation and shared resources to address cable security challenges.


Evidence

NATO and EU increased resilience, response and deterrence; Baltic Sea NATO Allies and EU MOU in May 2025; ITU International Advisory Body Declaration in February 2025; EU Action Plan on Cable Security


Major discussion point

International Cooperation and Governance


Topics

Infrastructure | Legal and regulatory


Agreed with

– Karianne Tung
– Bosun Tijani
– Liisa-Ly Pakosta
– Sandra Maximiano

Agreed on

International Cooperation Necessity


Multi-stakeholder community should have more prominent role in submarine cable resilience discussions

Explanation

Syrjala advocates for greater involvement of the multi-stakeholder community in discussions about submarine cable resilience. He emphasizes the importance of enhancing international cooperation and giving various stakeholders a more significant voice in addressing these critical infrastructure challenges.


Evidence

Call for multi-stakeholder community to have more prominent role


Major discussion point

International Cooperation and Governance


Topics

Infrastructure | Legal and regulatory


Need for new technologies in protecting critical undersea infrastructure with sense of urgency

Explanation

Syrjala emphasizes the urgent need to develop and deploy new technologies for protecting critical undersea infrastructure. He calls for well-working mechanisms that are innovative and willing to experiment, highlighting three priority areas for resilience.


Evidence

Three priority areas: adequacy of repair capacity, material preparation, infrastructure monitoring and sensing capabilities


Major discussion point

Technical Solutions and Innovation


Topics

Infrastructure | Cybersecurity


Comprehensive security model requires holistic understanding connecting to other critical infrastructure areas

Explanation

Syrjala explains that Finland applies a comprehensive security model that has been in place for decades. This approach recognizes that telecommunications cables are only part of the broader infrastructure challenges and requires connecting cable security to other critical areas to maintain societal functions during both peace and crisis.


Evidence

Finland’s decades-long comprehensive security model, holistic understanding of infrastructure beneath the waves


Major discussion point

Regional and Global Coordination Challenges


Topics

Infrastructure | Cybersecurity



Bosun Tijani

Speech speed

171 words per minute

Speech length

1370 words

Speech time

480 seconds

Cable cuts cause significant economic and social impact, with ministers having no answers for citizens during outages

Explanation

Minister Tijani describes the real-world impact of cable cuts, particularly referencing the March incident in West Africa. He explains how governments are looked to for answers during natural disasters, but ministers often lack adequate responses when critical cable infrastructure fails, highlighting the governance gap in cable resilience.


Evidence

March cable cuts in West African region, personal experience as minister having no answers for citizens, people looking to government rather than companies during disasters


Major discussion point

Critical Infrastructure Vulnerability and Importance


Topics

Infrastructure | Development


Submarine cables are not just technical assets but the most important critical infrastructure globally

Explanation

Minister Tijani argues that submarine cables should be viewed beyond their technical function as the backbone of the global digital economy. He emphasizes that compared to other critical infrastructure, insufficient attention is being given to protecting these cables despite their fundamental importance.


Evidence

Digital economy as backbone of every economy, comparison to other critical infrastructure receiving more attention


Major discussion point

Critical Infrastructure Vulnerability and Importance


Topics

Infrastructure | Economic


Agreed with

– Giacomo Persi Paoli
– Karianne Tung
– Liisa-Ly Pakosta

Agreed on

Critical Infrastructure Dependency


Countries need multiple access points to cables rather than single cable connections

Explanation

Minister Tijani advocates for improving resilience by ensuring countries have multiple cable connections rather than relying on single cables. This diversification approach is part of the framework being developed through the ITU advisory body to improve resilience within countries.


Evidence

Nigeria has about eight subsea cables, framework for countries to be connected to more than one cable


Major discussion point

Resilience by Design and Redundancy


Topics

Infrastructure | Development


Resilience should be intentional and built into design, not an afterthought

Explanation

Minister Tijani argues that building resilience into subsea cables should be a deliberate, planned approach rather than something considered after the fact. He emphasizes that both intentional and unintentional risks to cables are becoming more severe due to their critical nature.


Evidence

Cables have been ‘dumped there’ with assumption that risks weren’t severe, both intentional and unintentional risks becoming more severe


Major discussion point

Resilience by Design and Redundancy


Topics

Infrastructure | Cybersecurity


Agreed with

– Sandra Maximiano
– Steinar Bjornstad

Agreed on

Resilience by Design Philosophy


Limited repair ships and talent for cable maintenance requires calculated investment and regional cooperation

Explanation

Minister Tijani identifies the scarcity of repair vessels and skilled personnel as key challenges in cable maintenance. He notes that investment in these resources must be carefully calculated since cable incidents don’t occur frequently, requiring regional cooperation to optimize resource allocation.


Evidence

Limited ships on African continent for deployment and repair, limited talent for maintenance and repair, need for calculated investment due to infrequent incidents


Major discussion point

Repair Capacity and Response Preparedness


Topics

Infrastructure | Development


Many countries and regions lack expertise and frameworks to address cable protection

Explanation

Minister Tijani highlights the global disparity in cable protection capabilities, noting that while some countries and regions have expertise and frameworks, many others lack basic understanding of where to start. He advocates for more collaboration and knowledge sharing to address this gap.


Evidence

Surprise at how many countries and regions have no clue where to start, need for regional redundancy and protocol mainstreaming


Major discussion point

Regional and Global Coordination Challenges


Topics

Infrastructure | Development


Agreed with

– Karianne Tung
– Jarno Syrjala
– Liisa-Ly Pakosta
– Sandra Maximiano

Agreed on

International Cooperation Necessity



Liisa-Ly Pakosta

Speech speed

121 words per minute

Speech length

475 words

Speech time

233 seconds

Estonia as a fully digital state faces actual threats to government services when cables are cut

Explanation

Minister Pakosta explains that Estonia’s status as a fully digital state makes it particularly vulnerable to cable attacks. All government services are digital, so attacks on subsea communication cables represent not just hybrid threats but actual threats to the country’s ability to serve its citizens.


Evidence

All Estonian government services are digital, attacks affect hospitals, transport, heating systems


Major discussion point

Critical Infrastructure Vulnerability and Importance


Topics

Infrastructure | Human rights


Agreed with

– Giacomo Persi Paoli
– Karianne Tung
– Bosun Tijani

Agreed on

Critical Infrastructure Dependency


Dramatic rise in ‘accidents’ during full-scale war in Ukraine, with intentional cable cutting by Russian shadow fleet

Explanation

Minister Pakosta directly attributes the increase in cable incidents to intentional actions by Russian shadow fleet during the Ukraine conflict. She distinguishes between historical unintentional incidents and the current pattern of deliberate cable cutting, framing it within the broader geopolitical context.


Evidence

Russian shadow fleet cutting connections, dramatic rise of ‘accidents’ during full-scale war in Ukraine, reference to 1884 Paris Convention showing historical pattern of bad actors cutting cables


Major discussion point

Evolving Threat Landscape and Security Concerns


Topics

Cybersecurity | Infrastructure


Need for universal set of rules to protect citizens across all continents

Explanation

Minister Pakosta emphasizes that while there may be local issues, the protection of subsea cables is fundamentally a global challenge requiring universal rules. She argues that seas have historically connected the world, and the technological capability of undersea cables continues this tradition, necessitating global governance frameworks.


Evidence

Seas connecting the whole world for ages, undersea cables as technological possibility connecting continents


Major discussion point

International Cooperation and Governance


Topics

Infrastructure | Legal and regulatory


Agreed with

– Karianne Tung
– Jarno Syrjala
– Bosun Tijani
– Sandra Maximiano

Agreed on

International Cooperation Necessity



Steinar Bjornstad

Speech speed

120 words per minute

Speech length

638 words

Speech time

317 seconds

Multiple cables and optical switching enable quick traffic rerouting when cables fail

Explanation

Bjornstad explains Tampnet’s technical approach to resilience through redundancy and advanced switching technology. They use multiple cables and optical switching technology, including offshore optical switching, to quickly redirect traffic when one cable fails, enabling protection within seconds rather than relying solely on electronic switching.


Evidence

Multiple cables, optical switching including offshore optical switching, ability to switch light in optical fiber cables, repair alliance membership for couple of weeks repair time


Major discussion point

Resilience by Design and Redundancy


Topics

Infrastructure | Cybersecurity


Agreed with

– Bosun Tijani
– Sandra Maximiano

Agreed on

Resilience by Design Philosophy
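
As a rough illustration of the optical protection switching described in the argument above, the following toy sketch models a primary/backup cable pair where traffic moves to the standby path as soon as monitoring reports a failure on the active one. The class and method names are hypothetical assumptions; real systems perform this switch in optical hardware, not application code.

    # Toy model of 1+1 protection switching between two cable paths.
    # Names and behavior are illustrative assumptions only.

    class ProtectedLink:
        def __init__(self, primary: str, backup: str):
            self.paths = {primary: True, backup: True}  # path -> is_up
            self.active = primary
            self.backup = backup

        def report_failure(self, path: str) -> str:
            # Called when monitoring (e.g. loss of signal) detects a cut.
            self.paths[path] = False
            if path == self.active and self.paths.get(self.backup, False):
                self.active, self.backup = self.backup, self.active
            return self.active

    link = ProtectedLink(primary="cable-A", backup="cable-B")
    print(link.active)                     # cable-A
    print(link.report_failure("cable-A"))  # traffic now rides cable-B

The design point Bjornstad highlights is that switching the light itself in the optical layer avoids slower electronic re-convergence, which is why failover can complete within seconds.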


Repair alliance membership ensures cable repair within couple of weeks when incidents occur

Explanation

Bjornstad describes how Tampnet participates in a repair alliance that provides mutual support for cable repairs. This collaborative approach ensures that when cable damage occurs, repair can be completed within a couple of weeks, which is crucial for maintaining service continuity.


Evidence

Member of repair alliance, repair within couple of weeks if something goes wrong


Major discussion point

Repair Capacity and Response Preparedness


Topics

Infrastructure | Economic


Combining multiple monitoring tools including AIS information and fiber sensing provides comprehensive situational awareness

Explanation

Bjornstad describes how Tampnet evolved from simply noticing when cables failed to implementing comprehensive monitoring systems. They combine AIS vessel tracking information with fiber sensing technology to monitor both surface and subsea activity, providing early warning of potential threats up to two kilometers away.


Evidence

AIS information from vessels including trolling activity, fiber sensing detecting approaching trawlers two kilometers away, statistics on small hits to identify vulnerable cable areas, most network covered with fiber sensing


Major discussion point

Technical Solutions and Innovation


Topics

Infrastructure | Cybersecurity


Fiber sensing technology allows cables to work as underwater microphones, detecting approaching threats

Explanation

Bjornstad explains how fiber sensing technology transforms cables into arrays of underwater microphones that can detect subsea activity. This technology enables operators to see and hear approaching vessels like trawlers from significant distances, providing early warning capabilities for potential cable threats.


Evidence

Fiber works as microphone or array of microphones, can see trawler approaching two kilometers away, can see subsea activity, different fiber sensing technology for statistics on small hits


Major discussion point

Technical Solutions and Innovation


Topics

Infrastructure | Cybersecurity



Evijs Taube

Speech speed

129 words per minute

Speech length

1050 words

Speech time

487 seconds

Successful 28-day winter repair demonstrates importance of preparation, spare parts, and standby vessel agreements

Explanation

Taube describes their organization’s experience with a cable incident, emphasizing how proper preparation enabled successful repair within 28 days during challenging winter conditions. The repair required three attempts and depended on having the right spare parts, cables, joints, vessel agreements, and favorable weather conditions.


Evidence

28-day repair in February during winter storms, three repair attempts with third being successful, need for right spares, spare cable, joints, vessel standby agreement, weather conditions with waves no higher than two meters


Major discussion point

Repair Capacity and Response Preparedness


Topics

Infrastructure | Cybersecurity


Clear crisis management teams, communication channels with partners, and public communication strategies are essential

Explanation

Taube outlines the critical components of effective incident response, emphasizing the need for well-defined crisis management teams, established communication protocols with various stakeholders, and balanced public communication. He stresses that all these elements require advance preparation and training.


Evidence

Core crisis management team should be precise and trained, established communication lines with partners including authorities and international partners, daily press releases with balance between disclosure and sensitivity


Major discussion point

Crisis Management and Communication


Topics

Infrastructure | Legal and regulatory


Importance of established communication lines with international partners and 24/7 contact protocols

Explanation

Taube emphasizes the critical need for pre-established communication protocols with international partners, particularly since most subsea cables connect different countries. He describes having 24/7 communication lines that enable immediate contact within minutes or seconds when incidents occur.


Evidence

Cables connecting countries (Latvia-Sweden example), established 24/7 communication line to the NOC, knowing whom to talk to within minutes or seconds, documented and trained procedures


Major discussion point

Crisis Management and Communication


Topics

Infrastructure | Legal and regulatory


Preparation through table exercises and drills, though real incidents provide irreplaceable learning

Explanation

Taube advocates for comprehensive preparation including table exercises, procedures, algorithms, and drills, while acknowledging that actual incidents provide learning experiences that cannot be replicated in simulations. He references the principle that while plans may not survive contact with reality, the planning process itself is invaluable.


Evidence

Table exercises, procedures, algorithms, spare parts preparation, reference to Truman quote about plans going to waste but planning being important, practical lessons cannot compare to table exercises


Major discussion point

Crisis Management and Communication


Topics

Infrastructure | Cybersecurity


Agreed with

– Camino Kavanagh
– Sandra Maximiano

Agreed on

Preparation and Planning Importance



Sandra Maximiano

Speech speed

120 words per minute

Speech length

1547 words

Speech time

773 seconds

Building redundancy through multiple geographical diverse cable routes and avoiding strategic choke points

Explanation

Maximiano outlines key technical approaches to building resilience, emphasizing the importance of establishing multiple geographically diverse cable routes and alternative connections. She advocates for avoiding strategic choke points that could create vulnerabilities and deploying enhanced protection measures in high-risk areas.


Evidence

Multiple geographical diverse cable routes, alternative routes including satellite backups and terrestrial connections, avoiding strategic choke points, deploying armored cables and burying cables deeper in high-risk areas


Major discussion point

Resilience by Design and Redundancy


Topics

Infrastructure | Cybersecurity


Agreed with

– Bosun Tijani
– Steinar Bjornstad

Agreed on

Resilience by Design Philosophy


Need for collective mechanisms to support repair capacity, especially for regions lacking resources

Explanation

Maximiano emphasizes the importance of developing collective support mechanisms for cable repair, particularly for regions and countries that lack the resources to respond independently. She highlights this as especially critical for island states and remote regions that may be more vulnerable.


Evidence

Collective mechanisms for regions and countries lacking resources, particular importance for island states and remote regions


Major discussion point

Repair Capacity and Response Preparedness


Topics

Infrastructure | Development


Agreed with

– Evijs Taube
– Camino Kavanagh

Agreed on

Preparation and Planning Importance


ITU Advisory Body provides global platform for collaboration between public and private sectors

Explanation

Maximiano describes the ITU Advisory Body as a unique global platform that brings together diverse stakeholders from both public and private sectors across the world. She emphasizes how this partnership combines ITU’s capacity for worldwide dialogue with ICPC’s technical expertise in submarine cable resilience.


Evidence

40 outstanding personalities from public and private sectors globally, partnership between ITU and ICPC, countries ranging from large economies to small island states, Abuja Declaration in February, three thematic working groups for 2025-26


Major discussion point

International Cooperation and Governance


Topics

Infrastructure | Legal and regulatory


Agreed with

– Karianne Tung
– Jarno Syrjala
– Bosun Tijani
– Liisa-Ly Pakosta

Agreed on

International Cooperation Necessity


Need for regulation to keep pace with technological innovation and high-capacity connectivity demands

Explanation

Maximiano argues that regulatory frameworks must evolve at the same pace as technological innovation, particularly given the demands of AI and high-capacity connectivity. She emphasizes that ANACOM is actively working to ensure regulatory frameworks can anticipate infrastructure bottlenecks and enable sustainable connectivity.


Evidence

AI training and deployment demanding massive computational capacity and energy-intensive data centers, ANACOM monitoring trends to ensure regulatory framework anticipates bottlenecks


Major discussion point

Regulatory Framework and Best Practices


Topics

Infrastructure | Legal and regulatory


Small island states and remote regions need special attention where economic incentives for response are lower

Explanation

Maximiano highlights the particular vulnerability of small island states and remote regions, where economic incentives for maintaining prompt response mechanisms may be insufficient. She argues that while these regions may have lower economic incentives, their response capacity is important for global connectivity and requires collaborative support.


Evidence

Economic incentives for prompt response mechanisms are lower in small states and remote islands, but response is important for everyone


Major discussion point

Regional and Global Coordination Challenges


Topics

Infrastructure | Development



Kent Bressie

Speech speed

124 words per minute

Speech length

1768 words

Speech time

848 seconds

Need for better awareness and communication between submarine cable operators, marine industries, and governments

Explanation

Bressie emphasizes that effective cable protection requires ongoing dialogue and communication among all stakeholders at national, regional, and multilateral levels. He stresses that this is not a one-time effort but requires continuous engagement to ensure all parties understand their roles and responsibilities.


Evidence

Never-ending tasks requiring ongoing dialogue at all levels, need for understanding between industry and government roles


Major discussion point

Public-Private Partnership Requirements


Topics

Infrastructure | Legal and regulatory


Agreed with

– Karianne Tung
– Jarno Syrjala

Agreed on

Public-Private Partnership Requirements


Governments need to understand what industry does and recognize actions only governments can take

Explanation

Bressie argues that effective public-private partnerships require governments to understand industry’s existing protection and resilience efforts while recognizing the unique actions that only governments can take, particularly political and military responses to intentional damage. Some tasks are shared between industry and government.


Evidence

Industry already promotes cable protection and resilience in design and operation, governments uniquely positioned for political and military responses to intentional damage


Major discussion point

Public-Private Partnership Requirements


Topics

Infrastructure | Legal and regulatory


ICPC best practices advocate holistic approach including default separation distances and single government contact points

Explanation

Bressie describes the ICPC’s comprehensive best practices document that provides specific recommendations for governments. The practices advocate for a holistic approach to risk management and include practical measures like maintaining separation distances from other marine activities and establishing clear government contact points.


Evidence

12-page user-friendly best practices document, default separation distances between cables and other marine activities, single point of contact within national governments, cable protection laws, minimization of restrictions and fees


Major discussion point

Regulatory Framework and Best Practices


Topics

Infrastructure | Legal and regulatory


Government policies can potentially undermine cable protection through excessive delays and clustering requirements

Explanation

Bressie warns that well-intentioned government regulations can inadvertently harm cable security. He identifies specific problematic policies including national security regulations that create massive delays for permits and regulations that force cables into narrow corridors, which can increase vulnerability to single-event damage.


Evidence

National security-oriented regulation creating massive delays for installation and repair permits, regulations encouraging clustering of cables in narrow corridors, increased risk of single event damaging multiple cables


Major discussion point

Regulatory Framework and Best Practices


Topics

Infrastructure | Legal and regulatory


Removing cables from nautical charts is misguided and would increase risks from fishing and anchoring

Explanation

Bressie strongly opposes efforts by some governments to remove cable locations from nautical charts, arguing this would significantly increase the primary causes of cable damage. He explains that since approximately 70% of cable damage is caused by fishing and anchoring activities, removing cables from charts would make the problem worse and complicate damage claims.


Evidence

Approximately 70% of cable damage caused by fishing and anchors, removing cables would increase risks and make damage claims impossible


Major discussion point

Regulatory Framework and Best Practices


Topics

Infrastructure | Legal and regulatory


Disagreed with

Disagreed on

Approach to cable location transparency vs. security



Camino Kavanagh

Speech speed

147 words per minute

Speech length

934 words

Speech time

378 seconds

Historical perspective shows 60% natural causes, 35% unintentional accidents, 5% malicious activity, but intentional threats are increasing

Explanation

Kavanagh provides historical context from the 1881-1884 period showing that the fundamental causes of cable damage have remained relatively consistent over 143 years. However, she notes that while the basic statistics haven’t changed dramatically, there is growing concern about state-backed interventions and intentional damage, particularly in the European context.


Evidence

1881 statistics from North Sea: 60% natural events, 35% unintentional acts/force majeure, 5% gross negligence and malign activities; reference to World War I period increase in state-backed interventions; Nord Stream and Baltic Sea incidents


Major discussion point

Evolving Threat Landscape and Security Concerns


Topics

Infrastructure | Cybersecurity


Different regions experience very different threat landscapes and problem sets

Explanation

Kavanagh emphasizes that while European regions may be experiencing increased intentional threats, other regions face very different challenges. This diversity in regional threat landscapes makes coordination and regulatory alignment particularly difficult, as different areas require different approaches to cable protection.


Evidence

European context differs significantly from other regions, different regions experiencing very different problems, coordination challenges due to different problem sets


Major discussion point

Evolving Threat Landscape and Security Concerns


Topics

Infrastructure | Legal and regulatory



Session video

Speech speed

124 words per minute

Speech length

201 words

Speech time

96 seconds

Distributed acoustic sensing can turn fiber optic cables into virtual hydrophones for ocean monitoring

Explanation

The video demonstrates how distributed acoustic sensing (DAS) technology works by injecting light pulses into fiber cables and analyzing backscattered light to detect acoustic pressure fields. This technology can transform existing fiber optic cables into tens of thousands of virtual hydrophones, enabling comprehensive ocean monitoring.


Evidence

Light pulses injected into fiber cable, backscattered light analysis, acoustic sources like whales creating pressure fields that stretch and compress fiber, tens of thousands of virtual hydrophones created


Major discussion point

Technical Solutions and Innovation


Topics

Infrastructure | Cybersecurity
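
The physics behind this claim reduces to a time-of-flight mapping: backscatter generated at position z along the fiber returns to the interrogator after t = 2nz/c, so each arrival time corresponds to one spot on the cable, and the channel spacing determines how many virtual hydrophones a span provides. The minimal Python sketch below illustrates that arithmetic; the refractive index and the 10-meter spacing are assumed illustrative values, not figures from the video.

    # Illustrative only: the time-of-flight mapping behind distributed
    # acoustic sensing (DAS). Constants are assumptions, not vendor specs.
    C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s
    N_FIBER = 1.468           # assumed group index of standard silica fiber

    def backscatter_position_m(round_trip_s: float) -> float:
        # Backscatter from position z returns after t = 2*n*z/c,
        # so z = c*t / (2*n).
        return C_VACUUM * round_trip_s / (2.0 * N_FIBER)

    def virtual_channel_count(fiber_km: float, spacing_m: float = 10.0) -> int:
        # Number of sensing channels ("virtual hydrophones") at a given
        # channel spacing, set by the interrogator's gauge length.
        return int(fiber_km * 1000.0 / spacing_m)

    print(backscatter_position_m(1e-3))   # ~102,000 m: a 1 ms round trip maps ~102 km down the cable
    print(virtual_channel_count(100.0))   # 10,000 channels on a 100 km span at 10 m spacing

At this spacing, spans of a few hundred kilometers yield the tens of thousands of virtual hydrophones the video describes.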


Existing global fiber optic cable network can serve as comprehensive monitoring system for multiple applications

Explanation

The video argues that the more than 1 million kilometers of existing fiber optic cables worldwide could be leveraged as a global monitoring system. This system could provide insights beyond ocean monitoring, including understanding earthquake mechanisms, landslide risks, avalanches, and floods.


Evidence

More than 1 million kilometers of fiber optic cables globally, applications for earthquakes, landslides, avalanches, floods monitoring


Major discussion point

Technical Solutions and Innovation


Topics

Infrastructure | Cybersecurity


DAS technology provides immediate data availability at cable endpoints for real-time monitoring

Explanation

The video emphasizes that distributed acoustic sensing provides immediate access to monitoring data at the shore end of cables. This real-time capability enables continuous monitoring and immediate response to detected events or threats.


Evidence

Data available immediately ashore at the end of the cable, real-time ocean listening capability


Major discussion point

Technical Solutions and Innovation


Topics

Infrastructure | Cybersecurity


Agreements

Agreement points

Critical Infrastructure Dependency

Speakers

– Giacomo Persi Paoli
– Karianne Tung
– Bosun Tijani
– Liisa-Ly Pakosta

Arguments

Subsea cables carry over 99% of global intercontinental data traffic, making them critical digital infrastructure


Digital society is completely dependent on submarine cables for healthcare, education, transport systems


Submarine cables are not just technical assets but the most important critical infrastructure globally


Estonia as a fully digital state faces actual threats to government services when cables are cut


Summary

All speakers unanimously agree that subsea cables represent critical infrastructure that modern digital society cannot function without, carrying the vast majority of global data traffic and supporting essential services


Topics

Infrastructure | Cybersecurity


International Cooperation Necessity

Speakers

– Karianne Tung
– Jarno Syrjala
– Bosun Tijani
– Liisa-Ly Pakosta
– Sandra Maximiano

Arguments

Cross-border cooperation is crucial since submarine cables cross national borders and international waters


International cooperation through NATO, EU, and ITU helps build resilience and response capabilities


Many countries and regions lack expertise and frameworks to address cable protection


Need for universal set of rules to protect citizens across all continents


ITU Advisory Body provides global platform for collaboration between public and private sectors


Summary

Strong consensus that submarine cable protection requires extensive international cooperation due to the cross-border nature of the infrastructure and varying national capabilities


Topics

Infrastructure | Legal and regulatory


Public-Private Partnership Requirements

Speakers

– Karianne Tung
– Jarno Syrjala
– Kent Bressie

Arguments

Close cooperation between private sector and civil/defense authorities maximizes knowledge and strength


Solid public-private partnership is one of the most important aspects of telecommunications resilience


Need for better awareness and communication between submarine cable operators, marine industries, and governments


Summary

Clear agreement that effective subsea cable protection requires strong partnerships between government and private sector, combining their respective expertise and capabilities


Topics

Infrastructure | Legal and regulatory


Resilience by Design Philosophy

Speakers

– Bosun Tijani
– Sandra Maximiano
– Steinar Bjornstad

Arguments

Resilience should be intentional and built into design, not an afterthought


Building redundancy through multiple geographical diverse cable routes and avoiding strategic choke points


Multiple cables and optical switching enable quick traffic rerouting when cables fail


Summary

Consensus that resilience must be intentionally designed into cable systems from the beginning, incorporating redundancy and multiple pathways rather than being added as an afterthought


Topics

Infrastructure | Cybersecurity


Preparation and Planning Importance

Speakers

– Evijs Taube
– Camino Kavanagh
– Sandra Maximiano

Arguments

Preparation through table exercises and drills, though real incidents provide irreplaceable learning


Prepare and exercise your preparedness


Need for collective mechanisms to support repair capacity, especially for regions lacking resources


Summary

Strong agreement that effective cable protection requires extensive advance preparation, including exercises, drills, and pre-positioned resources for rapid response


Topics

Infrastructure | Cybersecurity


Similar viewpoints

These speakers share the view that the threat landscape has fundamentally changed, with increased intentional attacks on subsea cables, particularly in the context of current geopolitical tensions and conflicts

Speakers

– Karianne Tung
– Liisa-Ly Pakosta
– Camino Kavanagh

Arguments

Recent incidents with damages to subsea cables in Baltic Sea and Red Sea highlight increased vulnerability


Dramatic rise in ‘accidents’ during full-scale war in Ukraine, with intentional cable cutting by Russian shadow fleet


Historical perspective shows 60% natural causes, 35% unintentional accidents, 5% malicious activity, but intentional threats are increasing


Topics

Cybersecurity | Infrastructure


Both emphasize the potential of advanced sensing technologies, particularly fiber sensing and distributed acoustic sensing, to transform cables into comprehensive monitoring systems

Speakers

– Steinar Bjornstad
– Session video

Arguments

Combining multiple monitoring tools including AIS information and fiber sensing provides comprehensive situational awareness


Distributed acoustic sensing can turn fiber optic cables into virtual hydrophones for ocean monitoring


Topics

Infrastructure | Cybersecurity


Both speakers highlight the disparity in global capabilities for cable protection, with particular concern for developing countries and small island states that lack resources and expertise

Speakers

– Bosun Tijani
– Sandra Maximiano

Arguments

Many countries and regions lack expertise and frameworks to address cable protection


Small island states and remote regions need special attention where economic incentives for response are lower


Topics

Infrastructure | Development


Unexpected consensus

Technology Integration for Monitoring

Speakers

– Steinar Bjornstad
– Evijs Taube
– Session video

Arguments

Fiber sensing technology allows cables to work as underwater microphones, detecting approaching threats


Importance of established communication lines with international partners and 24/7 contact protocols


Existing global fiber optic cable network can serve as comprehensive monitoring system for multiple applications


Explanation

Unexpected strong consensus emerged around leveraging existing cable infrastructure for comprehensive monitoring beyond just communication purposes, including environmental monitoring and threat detection. This represents a shift from viewing cables purely as communication infrastructure to seeing them as multi-purpose sensing networks


Topics

Infrastructure | Cybersecurity


Regulatory Framework Challenges

Speakers

– Kent Bressie
– Sandra Maximiano

Arguments

Government policies can potentially undermine cable protection through excessive delays and clustering requirements


Need for regulation to keep pace with technological innovation and high-capacity connectivity demands


Explanation

Unexpected consensus that well-intentioned government regulations can actually harm cable security, with both speakers acknowledging that regulatory frameworks need to be carefully designed to support rather than hinder cable protection efforts


Topics

Infrastructure | Legal and regulatory


Overall assessment

Summary

The discussion revealed remarkably strong consensus across all speakers on fundamental issues: the critical importance of subsea cables to modern society, the necessity of international cooperation, the requirement for public-private partnerships, and the need for intentional resilience design. There was also broad agreement on the changing threat landscape and the importance of preparation and planning.


Consensus level

Very high level of consensus with no significant disagreements identified. This strong alignment suggests the subsea cable protection community has developed shared understanding of challenges and solutions, which bodes well for coordinated international action. The consensus spans technical, policy, and governance dimensions, indicating mature thinking about this critical infrastructure challenge.


Differences

Different viewpoints

Approach to cable location transparency vs. security

Speakers

– Kent Bressie

Arguments

Removing cables from nautical charts is misguided and would increase risks from fishing and anchoring


Summary

Kent Bressie strongly opposes government efforts to remove cable locations from nautical charts for security reasons, arguing this would increase the primary causes of damage (70% from fishing/anchoring). However, no other speakers directly addressed this specific policy debate, suggesting potential disagreement exists but wasn’t explicitly debated.


Topics

Infrastructure | Legal and regulatory


Unexpected differences

Limited explicit debate on regulatory approaches

Speakers

– Kent Bressie
– Sandra Maximiano

Arguments

Government policies can potentially undermine cable protection through excessive delays and clustering requirements


Need for regulation to keep pace with technological innovation and high-capacity connectivity demands


Explanation

Unexpectedly, there was minimal debate about the balance between security-focused regulation and operational efficiency. Kent Bressie warned about over-regulation creating delays and vulnerabilities, while Sandra Maximiano advocated for proactive regulatory frameworks. This fundamental tension between security and efficiency wasn’t directly addressed or debated.


Topics

Infrastructure | Legal and regulatory


Overall assessment

Summary

The discussion showed remarkable consensus on key issues: the critical importance of subsea cables, need for international cooperation, public-private partnerships, resilience by design, and preparation for incidents. The few areas of potential disagreement were not directly debated.


Disagreement level

Very low disagreement level. This consensus likely reflects the technical and collaborative nature of the subsea cable community, but may also indicate insufficient exploration of challenging policy trade-offs. The high level of agreement could facilitate implementation of recommended measures, but might also suggest that more difficult questions about resource allocation, regulatory balance, and competing priorities need deeper examination in future discussions.


Takeaways

Key takeaways

Subsea cables carrying 99% of global intercontinental data are critical infrastructure requiring urgent protection due to increasing intentional threats, particularly from geopolitical tensions and hybrid warfare


Resilience must be built by design through four key pillars: protection, planning, preparedness, and response – it cannot be improvised or treated as an afterthought


Strengthening subsea cable security is a ‘team sport’ requiring coordinated multi-stakeholder cooperation between governments, industry, academia, and international organizations


Public-private partnerships are essential, with governments needing to understand industry capabilities while taking actions only they can perform (political/military responses)


Technical innovations like fiber sensing and distributed acoustic sensing are revolutionizing threat detection and situational awareness for cable monitoring


Regional cooperation frameworks (North Sea 2024, Baltic Sea 2025) demonstrate effective models for cross-border collaboration on cable protection


Redundancy and route diversity are critical for resilience, with countries needing multiple cable connections rather than single points of failure


Repair capacity and preparedness require significant investment in specialized vessels, equipment, spare parts, and trained personnel


Different regions face vastly different threat landscapes, requiring tailored approaches while maintaining global coordination standards


Resolutions and action items

ITU Advisory Body for Submarine Cable Resilience established three working groups for 2025-26: resilience by design, timely deployment/repair, and risk identification/monitoring


Abuja Declaration approved in February 2025 as milestone for international cooperation on submarine cable resilience


Countries committed to implementing EU Action Plan on Cable Security with four objectives: prevention, detection, response/repair, and deterrence


Multiple countries signed New York Declaration on Submarine Cable Security to promote integrity and accessibility


Norway establishing dedicated cooperation between private sector and civil/defense authorities with clarified roles and responsibilities


Nigeria setting up dedicated desk within communications commission for cable protection protocols and international coordination


Finland transposing NIS2 directive into national law (April 2025) with comprehensive telecommunications resilience requirements


Unresolved issues

Limited repair capacity globally, particularly shortage of specialized vessels and trained personnel for cable maintenance


Lack of adequate frameworks and expertise in many developing countries and small island states for cable protection


Insufficient economic incentives for prompt response mechanisms in remote regions where commercial viability is lower


Debate over removing cables from nautical charts for security versus safety concerns (industry strongly opposes removal)


Challenges in attributing responsibility for cable incidents and distinguishing between intentional and unintentional damage


Regulatory delays and bureaucratic obstacles that can undermine cable protection and repair efforts


Workforce development challenges in attracting young talent to the submarine cable industry


Coordination difficulties across different jurisdictions and legal frameworks for international cable systems


Suggested compromises

Collective mechanisms to support repair capacity for regions and countries lacking resources, with shared investment in repair vessels and joint capacity


Balanced approach to cable route planning that avoids both strategic choke points and excessive clustering while meeting connectivity needs


Graduated response protocols that prioritize critical infrastructure restoration based on national security and public welfare importance


Flexible licensing and permitting procedures that balance security requirements with operational efficiency for repairs


Regional cooperation models that can be adapted to different geographic and political contexts while maintaining core protection principles


Public-private information sharing frameworks that protect sensitive operational details while enabling effective threat response


Technology sharing arrangements where advanced countries assist developing nations with monitoring capabilities and expertise transfer


Thought provoking comments

Resilience should be intentional. It shouldn’t be something that is an afterthought… I was surprised at how, of course, we were fortunate, the private sector… came together. But as a minister, I didn’t have any answer to give to people. And people don’t often complain about companies when you have natural disasters. It’s the government that they look to for answer.

Speaker

Bosun Tijani (Minister of Communications, Innovation and Digital Economy of Nigeria)


Reason

This comment fundamentally reframed the discussion from technical protection measures to governance accountability. Tijani’s personal experience during the West African cable cuts revealed a critical gap between technical preparedness and political responsibility, highlighting how governments are held accountable for infrastructure failures regardless of ownership structures.


Impact

This shifted the conversation toward the need for proactive government frameworks and sparked subsequent discussions about public-private partnerships, regulatory preparedness, and the importance of having clear protocols before incidents occur. It influenced other speakers to emphasize planning and preparedness rather than just reactive measures.


Let us remember that it was 1884 when the Paris Convention of Undersea Telegraphic Cables was agreed… So this is actually the situation where we are just now, as well, within the broader geopolitical situation. What we see around this area… that the Russian shadow fleet is cutting down our connections.

Speaker

Liisa-Ly Pakosta (Minister of Justice and Digital Affairs of Estonia)


Reason

This historical parallel was profound because it connected current geopolitical tensions to a 140-year pattern of intentional cable disruption during conflicts. By referencing the 1884 convention, Pakosta demonstrated that cable protection challenges aren’t new but have evolved with geopolitical contexts, directly naming current threat actors.


Impact

This comment elevated the discussion from technical and regulatory issues to explicit geopolitical framing, legitimizing direct discussion of state-sponsored threats. It influenced the tone of subsequent technical discussions by establishing the current security environment as fundamentally different from peacetime operations.


Statistics from 1882: 60% of damage caused by natural events, 35% by unintentional acts due to accidents at sea or force majeure, and 5% due to gross negligence and some malign activities… those statistics wouldn’t have changed very much, although… the stats between natural causes and unintentional damage… would slightly change… it’s very hard to ascertain responsibility for some of the incidents.

Speaker

Camino Kavanagh (UNIDIR expert)


Reason

This historical data analysis was intellectually striking because it revealed the consistency of threat patterns across 140+ years while highlighting the fundamental challenge of attribution in cable incidents. It provided empirical grounding for policy discussions while acknowledging the inherent difficulty in distinguishing between intentional and unintentional damage.


Impact

This comment provided crucial context that influenced how other speakers framed their responses, moving away from assumptions about threat prevalence toward evidence-based discussions. It also highlighted the attribution challenge that became a recurring theme in technical monitoring discussions.


We see a renewed push by some governments to remove cables from nautical charts. This is woefully misguided. Given that approximately 70 percent of cable damage each year is caused by fishing and anchors, removing cables from nautical charts would significantly increase those risks and make it impossible for cable owners to pursue damages claims.

Speaker

Kent Bressie (ICPC legal advisor)


Reason

This comment challenged a counterintuitive security approach that could actually increase vulnerabilities. It demonstrated how security-through-obscurity thinking can backfire in maritime infrastructure, where transparency actually enhances protection by enabling avoidance of accidental damage.


Impact

This practical insight influenced the discussion toward evidence-based security measures rather than intuitive but potentially counterproductive approaches. It reinforced the theme that effective protection requires understanding actual threat vectors rather than theoretical ones.


Every cable, existing cable is a big asset, and we can call it a big sensor. If we install… distributed or centralized… integrated system of such sensors in a particular area, for example, Baltic Sea… that would give a big benefit, not only protecting the cables, but to understand what is going on under the water.

Speaker

Evijs Taube (Latvia State Radio and Television Center)


Reason

This comment introduced a paradigm shift from viewing cables as passive infrastructure to active sensing networks. It suggested transforming the problem from protecting vulnerable assets to creating a comprehensive underwater surveillance system, turning the infrastructure itself into a security solution.


Impact

This technical insight influenced the discussion toward dual-use technologies and comprehensive situational awareness. It connected to earlier discussions about distributed acoustic sensing and elevated the conversation from individual cable protection to regional maritime domain awareness.
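

Taube's "cable as sensor" idea can be made concrete with a rough sketch. In distributed acoustic sensing, each short fiber segment acts as a virtual microphone reporting an energy reading per time interval; a monitor can then flag segments whose latest reading spikes far above their own recent baseline. The following minimal Python example uses entirely synthetic numbers and invented names, and stands in for what a real DAS pipeline would do at far higher channel density:

```python
import math
import random

def flag_anomalies(windows, threshold=4.0):
    """Flag fiber segments whose latest acoustic energy spikes above baseline.

    `windows` maps a segment's distance along the cable (km) to a list of
    per-interval RMS energy readings, oldest first; the last entry is 'now'.
    A segment is flagged when its latest reading exceeds the baseline mean
    by more than `threshold` standard deviations.
    """
    alerts = []
    for km, history in sorted(windows.items()):
        baseline, latest = history[:-1], history[-1]
        mean = sum(baseline) / len(baseline)
        stdev = math.sqrt(sum((e - mean) ** 2 for e in baseline) / len(baseline))
        if latest > mean + threshold * stdev:
            alerts.append((km, latest))
    return alerts

# Synthetic readings: quiet seabed noise on every 2 km segment of a 100 km
# cable, plus one loud event injected at km 42 (a toy stand-in for, say,
# a dragging anchor).
random.seed(1)
windows = {km: [abs(random.gauss(1.0, 0.1)) for _ in range(20)]
           for km in range(0, 100, 2)}
windows[42][-1] = 8.0
print(flag_anomalies(windows))  # -> [(42, 8.0)]
```

A deployed system would fuse such per-segment alerts with ship-tracking (AIS) data and coordinate them across operators, which is precisely the integration challenge Taube's proposal raises for a shared Baltic Sea network.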


We need a combination of national, regional, and international cooperation to achieve effective resilience measures… Threats to subsea communication cables are not limited by national borders, so international cooperation is vital for protection of subsea cables.

Speaker

Karianne Tung (Norwegian Minister of Digitalisation)


Reason

While cooperation was mentioned throughout, Tung’s framing established the multi-level governance structure needed for transnational infrastructure. Her concrete examples of North Sea and Baltic Sea cooperation agreements provided practical models for how abstract cooperation principles could be operationalized.


Impact

This set the framework for the entire discussion, with subsequent speakers building on the multi-stakeholder, multi-level cooperation theme. It influenced how other participants framed their national experiences within broader international contexts.


Overall assessment

These key comments fundamentally shaped the discussion by establishing three critical frameworks: (1) the historical continuity of cable threats with contemporary geopolitical urgency, (2) the shift from reactive technical protection to proactive governance accountability, and (3) the transformation of cables from passive infrastructure to active sensing networks. The most impactful insight was Tijani’s reframing of resilience as intentional governance responsibility, which influenced subsequent speakers to emphasize preparedness and planning over purely technical solutions. The historical perspectives from Pakosta and Kavanagh provided crucial context that legitimized current security concerns while grounding them in empirical evidence. Together, these comments elevated the discussion from a technical workshop to a strategic policy dialogue that balanced historical lessons, current geopolitical realities, and future technological possibilities.


Follow-up questions

How can we develop well-working mechanisms that are innovative and willing to experiment for protecting critical undersea infrastructure?

Speaker

Jarno Syrjala


Explanation

There’s a need for urgency in developing innovative technological solutions and experimental approaches to protect subsea cables, suggesting current methods may be insufficient


What is the optimal investment strategy for repair ships and talent given that cable incidents don’t happen frequently?

Speaker

Bosun Tijani


Explanation

The challenge of making calculated investments in repair capacity and skilled workforce when incidents are infrequent but critical when they occur needs further analysis


How can we better understand the statistics and data on malicious activities targeting subsea cables?

Speaker

Camino Kavanagh (implied by Giacomo Persi Paoli)


Explanation

There’s limited visibility into unsuccessful malicious attempts against subsea cables, making it difficult to assess the true scope of intentional threats


How can countries with limited resources and capacities invest in advanced monitoring technologies like DAS (Distributed Acoustic Sensing)?

Speaker

Camino Kavanagh


Explanation

Small governments and countries have limited resources to invest in expensive monitoring technologies, creating gaps in global protection coverage


How can we address the workforce challenges in the subsea cable industry to attract young talent?

Speaker

Camino Kavanagh


Explanation

Both industry and governments face difficulties in attracting young professionals to work in the subsea cable sector, which is critical for long-term resilience


How can we better integrate crisis management and emergency planning across different elements of subsea cable systems (submersed parts, network layer, supply chain, repair fleet)?

Speaker

Camino Kavanagh


Explanation

No government has yet successfully integrated all aspects of subsea cable protection into comprehensive crisis management systems


How can we ensure regulatory frameworks keep pace with rapid technological changes, particularly AI and high-capacity connectivity demands?

Speaker

Sandra Maximiano


Explanation

The rapid evolution of technology, especially AI requiring massive computational capacity, is outpacing regulatory frameworks designed to ensure cable resilience


How can we address cable damage by dark fleet ships through international mechanisms like the IMO?

Speaker

Kent Bressie


Explanation

The international community has not yet effectively addressed the threat posed by dark fleet vessels that can damage cables while operating without proper identification


How can we implement integrated distributed acoustic sensing systems across compact sea areas like the Baltic Sea?

Speaker

Evijs Taube


Explanation

Creating a comprehensive underwater monitoring network using existing cables as sensors could provide better situational awareness but requires coordination and technical implementation


How can we develop mechanisms to ensure continuity of dialogue between government and industry stakeholders despite personnel rotation?

Speaker

Giacomo Persi Paoli


Explanation

The challenge of maintaining effective public-private partnerships when government personnel frequently rotate to different positions needs systematic solutions


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.