WS #187 Bridging Internet and AI Governance from Theory to Practice
25 Jun 2025 14:15h - 15:30h
Session at a glance
Summary
This joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality explored how the internet’s foundational principles can guide AI governance as artificial intelligence becomes increasingly central to digital interactions. The discussion centered on two key questions: how internet principles of openness and decentralization can inform transparent AI governance, and how network neutrality concepts like generativity and fair competition can apply to AI infrastructure and content creation.
Vint Cerf emphasized that while the internet and AI are “different beasts,” AI systems should prioritize safety, transparency, and provenance of training data. He highlighted emerging standards like agent-to-agent protocols that could enable interoperability between AI systems. Sandrine Elmi Hersi from France’s ARCEP outlined three areas for applying internet values to AI: accelerating transparency in AI models, preserving distributed intelligence rather than centralized control, and extending non-discrimination principles to AI infrastructure and content curation.
Renata Mielli from Brazil’s CGI noted that while some internet governance principles like freedom and interoperability can transfer to AI, others like net neutrality may not directly apply since AI systems are inherently non-neutral. Hadia Elminiawi discussed Africa’s AI strategy and raised practical questions about implementing transparency requirements, suggesting that requiring open-source safety guardrails might be more feasible than full model transparency.
Several participants emphasized the challenge of market concentration in AI, contrasting it with the internet’s originally decentralized architecture. The discussion revealed tensions between promoting innovation and ensuring accountability, with speakers noting the need for risk-based approaches, liability frameworks, and multi-stakeholder governance. The session concluded with calls for transforming these principles into technical standards and regulatory frameworks while maintaining the collaborative spirit that made internet governance successful.
Key points
## Major Discussion Points:
– **Fundamental architectural differences between Internet and AI**: The discussion emphasized that while the Internet was built on open, decentralized, transparent, and interoperable principles, AI systems (particularly large language models) operate through centralized, opaque, and proprietary architectures controlled by a handful of major companies, creating tension between these two paradigms.
– **Applying Internet governance principles to AI governance**: Speakers explored how core Internet values like openness, transparency, non-discrimination, and net neutrality could be translated into AI governance frameworks, while acknowledging that some principles (like technical neutrality) may not directly apply since AI systems are inherently non-neutral.
– **Market concentration and gatekeeper concerns**: Multiple speakers highlighted the risk of AI systems becoming new gatekeepers that could limit user choice and content diversity, drawing parallels to earlier Internet governance challenges around platform dominance and the need for regulatory oversight to preserve competition and openness.
– **Global South representation and digital equity**: The discussion addressed how AI governance frameworks must include diverse global perspectives, particularly from Africa, Latin America, and Asia, to avoid replicating the digital divides and power imbalances that have characterized Internet development.
– **Practical implementation challenges**: Speakers debated the realistic prospects for international cooperation on AI governance, questioning whether major AI companies and governments have sufficient incentives to participate in multilateral governance frameworks, and emphasizing the need for risk-based approaches, liability frameworks, and technical standards.
## Overall Purpose:
The discussion aimed to bridge Internet governance principles with emerging AI governance challenges, exploring how decades of experience regulating Internet infrastructure and services could inform approaches to governing artificial intelligence systems. The session sought to move beyond theoretical frameworks toward practical implementation strategies for ensuring AI development remains aligned with values of openness, transparency, and user empowerment.
## Overall Tone:
The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers drawing encouraging parallels between Internet and AI governance challenges. However, the tone became more realistic and somewhat pessimistic as participants acknowledged significant obstacles, including corporate resistance to regulation, geopolitical tensions, market concentration, and the fundamental differences between Internet and AI architectures. Despite these challenges, the session concluded on a pragmatic note, with calls for continued collaboration and specific next steps for the working groups involved.
Speakers
**Speakers from the provided list:**
– **Olivier Crepin-Leblond** – Co-chair of the session, moderator for remote participation
– **Pari Esfandiari** – Co-chair for the Dynamic Coalition on Core Internet Values
– **Luca Belli** – Co-chair for the Dynamic Coalition on Network Neutrality
– **Vint Cerf** – Joining remotely from the US, works with a company that has invested heavily in AI and AI-based services, co-creator of the internet’s TCP/IP protocols
– **Sandrine Elmi Hersi** – Representative from ARCEP (the French regulatory authority for electronic communications), involved in shaping digital strategies within government
– **Renata Mielli** – Coordinator of CGI.br (Brazilian Internet Steering Committee), leading debates on net neutrality, internet openness and AI issues in Brazil
– **Hadia Elminiawi** – Representative from the African continent, discussing AI governance from African perspective
– **William Drake (Bill Drake)** – Commenter/additional speaker
– **Roxana Radu** – Commenter/additional speaker (participating remotely)
– **Shuyan Wu** – Representative from China Mobile (one of the world’s largest telecom operators), commenter/additional speaker
– **Yik Chan Ching** – Representative from PNAI (Policy Network on Artificial Intelligence), an intersessional process of the IGF
– **Alejandro Pisanty** – Online participant, previously involved in Core Internet Values Dynamic Coalition discussions
– **Audience** – Various audience members who asked questions (including Dominique Hazaël-Massieux from W3C, and Andrew Campling – internet standards and governance enthusiast)
**Additional speakers:**
– **Dominique Hazaël-Massieux** – Works for W3C (World Wide Web Consortium), oversees work around AI and its impact on the web
– **Andrew Campling** – Internet standards and internet governance enthusiast
Full session report
# Bridging Internet Core Values and AI Governance: A Comprehensive Report
## Executive Summary
This joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality examined how established internet governance principles might inform emerging AI governance frameworks. Moderated by Olivier Crepin-Leblond and co-chaired by Pari Esfandiari and Luca Belli, the discussion brought together international experts to explore the intersection between internet governance and AI systems.
The session revealed both opportunities and challenges in applying internet principles to AI governance. While speakers agreed on the importance of values like transparency and safety, they identified fundamental differences between the internet’s distributed architecture and AI’s centralized model. The discussion produced practical recommendations including risk-based governance approaches, technical standards development, and targeted interventions at AI-internet intersection points.
## Opening Framework and Central Questions
Pari Esfandiari opened by establishing the session’s premise: as generative AI becomes a primary gateway to content, internet core values must guide AI governance. She posed two key questions: how can internet principles of openness and decentralization inform transparent AI governance, and how can network neutrality concepts apply to AI infrastructure and content creation.
Luca Belli immediately introduced a fundamental tension, observing that “the Internet and AI are two different beasts.” He noted that, even as the community celebrates 51 years since the foundational internetworking paper, the contrast is stark: the internet was built on an open, decentralized, transparent, and interoperable architecture, whereas AI operates through highly centralized architectures controlled by major companies. This architectural difference became a recurring theme throughout the session.
## Expert Perspectives
### Vint Cerf: Technical Standards and Safety
Vint Cerf, joining remotely, emphasized that AI systems should prioritize safety, transparency, and provenance of training data. He highlighted ongoing work on the agent-to-agent (A2A) protocol and the model context protocol (MCP) to ensure interoperability between AI systems, drawing parallels to internet protocols.
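The session did not go into protocol details, but the interoperability problem Cerf described can be illustrated with a toy sketch. The Python below is purely hypothetical and does not reflect the actual A2A or MCP specifications; the `AgentMessage` type and `forward` helper are invented for illustration. It shows the underlying idea: when explicit context and provenance travel with every message, the last agent in a chain works from the same definitions as the first.

```python
# Illustrative sketch only: a hypothetical message format showing why explicit,
# machine-readable context matters when agents hand tasks to one another.
# This is NOT the real A2A or MCP wire format.
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    sender: str                                     # originating agent
    intent: str                                     # action being requested
    context: dict = field(default_factory=dict)     # explicit shared context
    provenance: list = field(default_factory=list)  # agents the task passed through

def forward(message: AgentMessage, next_agent: str) -> AgentMessage:
    """Hand a task down a chain of agents without losing context.

    Because context and provenance are copied rather than re-inferred,
    the third agent in a chain sees the same definitions as the first,
    avoiding the 'telephone game' drift Cerf warns about.
    """
    return AgentMessage(
        sender=next_agent,
        intent=message.intent,          # intent preserved verbatim
        context=dict(message.context),  # context copied, not guessed
        provenance=message.provenance + [message.sender],
    )

msg = AgentMessage(sender="travel-agent", intent="book_flight",
                   context={"currency": "EUR", "date_format": "ISO-8601"})
print(forward(msg, "airline-agent").provenance)  # ['travel-agent']
```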
Cerf challenged purely centralized views of AI, noting that “every time someone interacts with one of those [large language models], they are specializing it to their interests and their needs.” He advocated for risk-based approaches focusing on user risk and provider liability, with higher safety standards for high-risk applications like medical diagnosis and financial advice.
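As a concrete reading of this risk-based framing, consider the minimal sketch below. The tiers, domains, and safeguard names are assumptions invented for illustration, not drawn from Cerf's remarks or any specific regulation; the point is simply that obligations scale with the risk of the application.

```python
# A minimal sketch of a risk-based approach: obligations scale with the
# application's risk tier. All tiers, domains and safeguards below are
# illustrative assumptions, not an actual regulatory scheme.
RISK_TIERS = {
    "high": {"domains": {"medical_diagnosis", "financial_advice"},
             "requires": ["human_review", "audit_log", "provenance_disclosure"]},
    "low":  {"domains": {"entertainment", "drafting_assistance"},
             "requires": ["content_label"]},
}

def required_safeguards(domain: str) -> list[str]:
    """Return the safeguards a deployment in `domain` would need."""
    for tier in RISK_TIERS.values():
        if domain in tier["domains"]:
            return tier["requires"]
    return ["content_label"]  # unknown domains default to the lightest tier

assert "human_review" in required_safeguards("medical_diagnosis")
```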
### Sandrine Elmi Hersi: Regulatory Framework
Representing ARCEP (French regulatory authority), Elmi Hersi outlined a three-pronged approach: accelerating transparency in AI models to make “black boxes” more auditable; preserving distributed intelligence by ensuring plurality of access to AI development inputs; and extending non-discrimination principles from network neutrality to AI infrastructure and content curation.
She raised particular concerns about content diversity, questioning how to ensure it when AI chatbots return a single answer in place of the hundreds of web pages traditionally offered by search engines.
### Renata Mielli: Brazilian Perspective
Coordinator of CGI.br (Brazilian Internet Steering Committee), Mielli noted that while some internet governance principles like freedom and interoperability can transfer to AI, others like net neutrality may not directly apply since AI systems are inherently non-neutral, unlike internet infrastructure.
She emphasized transforming principles into technical standards while distinguishing between governance and regulation, and highlighted the need to reduce asymmetries and empower Global South voices in AI governance discussions.
### Hadia Elminiawi: African and Practical Perspective
Elminiawi provided insights from the African continent, noting that African countries’ AI capabilities vary significantly due to infrastructure, electricity, connectivity, and resource differences. She challenged idealistic transparency approaches, asking whether it is “realistic or even desirable to expect that all AI models be made fully open source.”
She suggested requiring open-source safety guardrails rather than full model transparency, proposing a more pragmatic approach balancing openness with security and investment concerns.
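Her proposal lends itself to a simple illustration: the model itself stays closed, while the guardrail layer wrapping it is published and auditable. The sketch below is a hypothetical rendering of that idea; the wrapper, check function, and refusal string are all invented for illustration.

```python
# Sketch of the 'open guardrails, closed model' idea: the filtering layer is
# what gets open-sourced and audited, while the model stays proprietary.
# The checks and refusal message are placeholder assumptions.
from typing import Callable

def guardrailed(model: Callable[[str], str],
                checks: list[Callable[[str], bool]]) -> Callable[[str], str]:
    """Wrap an opaque model with an openly published, inspectable filter."""
    def safe_model(prompt: str) -> str:
        output = model(prompt)
        if all(check(output) for check in checks):
            return output
        return "[withheld by safety guardrail]"
    return safe_model

# A trivially auditable check; real guardrails would be far richer.
def no_weapon_instructions(text: str) -> bool:
    return "how to build a weapon" not in text.lower()

safe = guardrailed(lambda p: f"model answer to: {p}", [no_weapon_instructions])
print(safe("hello"))  # passes the published checks, so the answer is returned
```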
## Additional Interventions and Perspectives
### William Drake: Critical Analysis
Drake provided a critical intervention emphasizing the need to define precisely what aspects of AI require governance rather than applying generic principles. He questioned whether there is genuine functional demand for international AI governance, noting that “we simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand.”
He suggested developing a detailed mapping matrix of which internet properties apply to specific AI contexts and applications.
### Andrew Campling: Social Media Lessons
Campling suggested looking at social media governance lessons rather than internet governance, emphasizing duty of care and precautionary principles. He noted the importance of learning from past failures in social media regulation.
### Dominique Hazaël-Massieux: W3C Standards Work
Representing W3C, Hazaël-Massieux highlighted ongoing work on AI and web standards, focusing specifically on the intersection of AI and internet technologies rather than broader AI governance.
### Yik Chan Ching: Policy Network Perspective
From the Policy Network on Artificial Intelligence (PNAI), Ching mentioned ongoing research on liability, interoperability, and environmental protection in AI systems, noting significant progress in AI standards development across regions.
### Shuyan Wu: Digital Equity Focus
From China Mobile, Wu emphasized ensuring equal access, protecting user rights, and bridging digital divides in the AI era.
### Alejandro Pisanty: Commercial Reality
Participating online, Pisanty questioned fundamental incentive structures, asking “Why would OpenAI, Google, Meta, et cetera… why would they come together and agree to limit themselves in some way?” He advocated for applying existing rules for automated systems rather than creating entirely new frameworks.
## Key Themes and Challenges
### Architectural Differences
The fundamental difference between internet and AI architectures emerged as a central challenge. The internet’s distributed design contrasts sharply with AI’s concentrated ownership and control, creating new governance challenges.
### Market Concentration Concerns
Multiple speakers highlighted concerns about AI market concentration and the emergence of new gatekeepers that could limit user choice and content diversity, drawing parallels to earlier internet governance challenges.
### Transparency vs. Practicality
A significant tension emerged between calls for maximum transparency and practical constraints including investment protection and security concerns. Speakers debated appropriate levels and mechanisms for AI transparency.
### Global South Inclusion
Several speakers emphasized including Global South perspectives and addressing existing digital divides to prevent their reproduction in AI governance frameworks.
## Areas of Convergence
Despite disagreements, several areas of consensus emerged:
– **Risk-based approaches**: Multiple speakers supported prioritizing governance based on risk levels and application contexts
– **Technical standards importance**: Strong agreement on the need for AI interoperability standards
– **Safety and transparency needs**: General agreement that AI systems require more transparency than currently provided
– **Stakeholder inclusion**: Consensus on the importance of diverse participation in governance discussions
## Implementation Recommendations
The session produced several concrete recommendations:
### Continued Collaboration
Participants agreed to continue discussions through Dynamic Coalition mailing lists to address unresolved issues.
### Detailed Mapping Exercise
Drake’s suggestion for developing a mapping matrix of internet properties applicable to specific AI contexts was endorsed as a practical next step.
### Regulatory Development
ARCEP committed to completing its technical report on applying internet core values to AI governance.
### Focused Interventions
Rather than generic AI governance, speakers recommended focusing on AI-internet intersection points where governance needs and stakeholder incentives may be clearer.
## Unresolved Questions
The discussion concluded with acknowledgment of fundamental questions requiring further work:
– How to balance innovation incentives with transparency and accountability requirements
– Whether binding international AI agreements are feasible given current political realities
– How to address liability and responsibility in multi-agent AI systems
– What constitutes genuine functional demand for AI governance versus assumed need
## Conclusion
This session revealed both promise and challenges in applying internet governance principles to AI systems. While there was agreement on core values like safety and transparency, fundamental tensions emerged between internet and AI architectures, transparency ideals and practical constraints, and governance aspirations and commercial realities.
The discussion produced pragmatic recommendations focusing on risk-based approaches, technical standards development, and targeted interventions. However, unresolved tensions around transparency requirements, stakeholder participation, and international cooperation indicate significant work remains to develop effective AI governance frameworks that preserve internet values while addressing AI’s unique characteristics.
The session demonstrated the value of diverse international perspectives while highlighting the need for continued dialogue and practical experimentation to bridge the gap between principles and implementation in AI governance.
Session transcript
Olivier Crepin-Leblond: Right, welcome everybody to this session, this joint session of the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality. I’m Olivier Crepin-Leblond, and co-chairing this session are going to be Luca Belli for the Dynamic Coalition on Network Neutrality and Pari Esfandiari for the Core Internet Values. It’s great to see so many of you here. As Luca said, if anybody wants to step up over to the table here, they’re very welcome to do so. We are going to have a session that’s going to be quite interactive. So we’ll have the speakers speak and so on, and then we’ll see if we can have a good discussion in the room about the topic. I’m just going to do a quick introduction of the speakers that we have. We’ll start with four speakers, each providing their angle on the topic. We’ll have Vint Cerf, who’s joining us remotely. Unfortunately, he couldn’t make it in person at this IGF. So he’s over in the US and he will let us know at some point when he is online, because he is, as often, doing more than one session at the same time. [Vint Cerf: Actually, I am. I am online.] He’s already there. Goodness gracious. OK, sorry, Vint. I have two eyes, but they both look in the same direction. I don’t know why. I should have also checked the screen. So Vint Cerf, Hadia Elminiawi, and then we’ll have Renata Mielli, also here, and Sandrine Elmi Hersi, who’s sitting next to me. After that, we’ll have what we call additional speakers. They’ll be commenting on what they’ve heard from the first set of speakers. There are three commenters: William Drake (Bill Drake), Roxana Radu and Shuyan Wu, who’s just arrived from China, so at the very last minute managed to make it here. So welcome to all of you. And then after that, we’ll open it to a wider discussion. But I’m kind of wasting time. We’ve only got 75 minutes, so I’m going to hand the floor straight over to Luca and to Pari for the next stage. Thank you.
Pari Esfandiari: Thank you very much, Olivier, and welcome everybody. It’s great to be here with all of you. We convene this session, Bridging Internet and AI Governance from Theory to Practice, not just because things are changing fast, but because the way we think about digital governance is being fundamentally reshaped. As technologies converge and accelerate, our governance systems haven’t kept up, and at the center of this shift is artificial intelligence. Let’s start with theory. The internet’s core values: global, interoperable, open, decentralized, end-to-end, robust and reliable, and freedom from harm. These were not just technical features, but deliberate design choices that made the internet a global commons for innovation, diversity and human agency. Now comes generative AI. It doesn’t just add another layer to the internet; it introduces a fundamentally different architecture and logic. We are moving from open protocols to centralized models, gated, opaque and controlled by a handful of actors. AI shifts the internet’s pluralism towards convergence, replacing inquiry with predictive narration and reducing user agency. This isn’t just a technical shift. It’s about who gets to define knowledge, shape discourse and influence decisions. It’s a profound governance challenge and a societal choice about the kind of digital future we want. If we are serious about preserving user agency, democratic oversight and an open, informative ecosystem, the core internet values can serve as signposts to guide us, but they need active support, updated policies and cross-sector commitment. This is where the practice begins. The good news is we are not starting from scratch: from UNESCO’s AI ethics framework to the EU AI Act, the US AI Bill of Rights and efforts by Mozilla and others, we are seeing real momentum to root AI governance in shared fundamental values. So yes, there is a real divergence, but also real opportunities to shape what comes next. And that’s our focus today. With that, I will hand it over to my co-moderator, Luca Belli. Thank you.
Luca Belli: Thank you very much, Pari and Olivier. And also, let me hold this. Is this working? Yes. Yes. Okay. Thank you. Are you sure? Because I’m not hearing myself. Is this working? I am here. Can you hear us? Okay. I’m sorry, it’s my headphone that’s not working. It’s not useful when I have to hear myself anyway. All right. So thank you very much, Olivier and Pari, for having organized this and for having been the driving force of this session, which actually builds upon what we have already done last year in our first joint venture, which was already quite successful. I always say that it’s good to build upon the sessions, building blocks and reports that we have already elaborated, so that we move forward, right? And something that already emerged as a sort of consensus last year in Riyadh are two main points. First, we have already discussed, for pretty much 20 years at least here at the IGF, internet governance and internet regulation, so we can start to distill some of those teachings and lessons into what we could apply to regulate the evolution of AI and AI governance. And second, to quote the expression Vint used last year, the Internet and AI are two different beasts. So we are speaking about two digital phenomena, but they are quite different. And the Internet, as Pari was reminding us very eloquently, has been built on an open, decentralized, transparent, interoperable architecture that made the success of the Internet over the past 50 years, at least since Vint penned it in 1974. But the question here is how we reconcile this with a highly centralized AI architecture. And I think that here there is a very important point we have been working on in the net neutrality and Internet openness debates over the past years, which is the concept of Internet generativity that we have enshrined in the documents and reports we have elaborated here over the past years: the capacity of the Internet to evolve thanks to the unfiltered contributions of the users, as a consequence of the fundamental core Internet values. Openness and transparency create a level playing field, a capacity to innovate, to share and use applications, services and content, and to make the Internet evolve according to how the users want it to. So users are not only passive users; they are prosumers. They create the Internet. Now, this is in fundamental tension with an AI that is frequently proprietary, non-interoperable and very opaque, both in the data sets used for training, which are usually the result of massive scraping of both personal data and copyrighted content in very peculiar ways that might be considered illegal in most countries with data protection or copyright legislation, and in the training and its output, which is very much opaque for the user. And very few companies can do this and supply this. So there is an enormous concentration phenomenon ongoing, which is quite the opposite of what the original internet philosophy was about. Now, to discuss this point, we have a series of fantastic speakers today. As I was mentioning before, since we are celebrating 51 years of the paper by Vint and Bob Kahn on the internetworking protocol, a protocol for internetworking networks, right, if I’m not mistaken, I think the first person that should go ahead should be Vint. So Pari, please, the floor is yours to present Vint.
Pari Esfandiari: Thank you very much. We have two overarching questions, and we would like our speakers to focus on those two overarching questions. I will read them for you. How can the internet’s foundational principles of openness and decentralization guide transparent and accountable AI governance, particularly as generative AI becomes a main gateway to content? And the second question: how can fundamental network neutrality principles such as generativity and competition on a level playing field apply to AI infrastructure, AI models, and content creation? So Vint, drawing on your unique experience in both the founding architecture of the internet and your work with the private sector, we are curious to hear your comments on these questions. Over to you.
Vint Cerf: Well, thank you so much for this opportunity. I want to remind everyone that I am not an expert on artificial intelligence. I barely manage my own intelligence, let alone artificial. But I work with a company that has invested very heavily in AI and in AI-based services. So I can reflect a little bit of that in trying to respond to these very important questions. The first thing that I would observe is that the Internet was intended to be accessible to everyone. And I think the AI efforts are reflective of that as well. The large language models, well, let me distinguish between large language models and machine learning tools for just a moment. All of you are well aware that AI has been an object of study since the 1960s. It’s gone through several iterations of phases, the most recent of which is machine learning, reinforcement learning, and then large language models. The reinforcement learning mechanisms have given us things like programs that can beat the best players of Go, programs that can tell you how proteins fold up, and that tells us something about their functionality. And more recently, there’s something at Google called AlphaEvolve, which is an artificial intelligence system that will invent other software to solve problems for you. The large language models that we interact with embody huge amounts of content, but they are specialized when they interact with the users. You use the term prompting to elicit output from these large language models. And the point I want to make here is that every time someone interacts with one of those, they are specializing it to their interests and their needs. So in a sense, we have a very distributed ability to adapt a particular large language model to a particular problem or to respond to a particular question. And that’s important: the fact that we are able to personalize our interactions with these sources of information is a very important element of useful access. The question about interoperability of the various machine learning systems is partly answered by the agent model idea. That is to say, the large language models are becoming mechanisms by which we can elicit not only responses, but also actions to be taken. So the so-called agentic generative AI is upon us. And consonant with that are two other standards that are being developed. One is called A2A, or agent-to-agent interaction, and the second is called MCP, which is a model context protocol to give these artificial intelligence agents a concept of the world in which they’re actually operating. The reason these are so important, and they create interoperability among various agentic systems, is that it’s very important for precision. It’s important that the agents, when they interact with us, and when they interact with each other, have a well-defined context in which that interaction takes place. And we need clarity, and we need confidence that the semantics are matched between the two agents. If anyone has ever played that parlor game called telephone, where you whisper something in someone’s ear, and then they whisper in the next person’s ear, and you go down the line, whatever comes out on the other end is almost never what started out at the beginning. We don’t want chains of agents to get confused, and so the A2A and MCP are mechanisms to try to make that work a lot better.
So I think this is a very important notion for us to ingest into the work of the core internet values, except they will have to become core AI values, which is clarity in interaction among the various agents, of course, among other things. Last point I would make is that as you interact with large language models, the so-called prompting exchanges, one of the biggest questions that we always have is how accurate is the output that we get from these things? We all know about hallucination and the generation of counterfactual output coming from agents. It’s very important that provenance of the information that is used by the agents or by the large language models and references be available for our own critical thinking and critical evaluation of what we get back. And so once again, that’s a kind of core internet value. How do I evaluate or how can I evaluate the output of these systems to satisfy myself that the content and the response is accurate? So those are just a few ideas that I think should inform the work of these dynamic coalitions as we project ourselves into this online AI environment. But I’ll stop there because I’m sure other people have many more important things to say in response to these questions.
Pari Esfandiari: Thank you very much, Vint, for that very informative discussion. And with that, I would go to Sandrine. Sandrine, based on your experience shaping digital strategies within government, how do you see this? Thank you.
Sandrine Elmi Hersi: Thank you. And let me first say that it’s a real pleasure… Thank you all for joining this session today and to discuss this important topic with partners from the Net Neutrality and Core Internet Values coalitions. And before we ask how to apply openness and transparency to AI governance, I would like to insist on the why, and why this application has become essential. So as it was already covered by Vint, LLMs, notably generative AI tools, are becoming a new default point of entry to online content and services for users. Since our conversation at the last IGF in Riyadh, we’ve been seeing this trend accelerating through the development of the use of individual chatbots, but also the establishment of response engines integrated into mainstream search tools. Generative AI is also increasingly embedded directly in end-users’ devices. And we are also seeing a shift from early-generation LLMs to new RAG (Retrieval-Augmented Generation) systems that are now included in AI tools and that can draw directly from the web. And looking ahead, agentic models could also centralize a wide range of users’ actions into a single AI interface. So the question is really: will tomorrow’s Internet still be open, decentralized and user-driven if most of our online actions are mediated by a handful of AI tools? So now, regarding the how, ARCEP, the French regulatory authority for electronic communications, is currently conducting technical hearings and field testing with a team of data scientists to explore this very question. Although our report is still in development, we can already identify three main areas for action to apply internet core values to AI governance. The first area is accelerating on AI transparency. Understanding generative AI models, what data they use, how they process information, and what limits they have is a prerequisite for trust. There is some progress, with more and more players now engaging with researchers and through sectoral initiatives such as standards and codes of conduct, but many models remain black boxes. We need greater openness, especially to the research community, to improve auditability and explainability, but also the efficiency of models. The second area is preserving the notion of intelligence at the edge of networks, which is the original spirit of the internet: intelligence distributed among users and applications, not centralized in platforms or infrastructure. We must notably ensure that users remain able to choose among diverse services and sources. This may require working on the technical and economic conditions that shape AI outputs, to guarantee a certain level of neutrality, plurality of views, and openness to a diverse range of content creators and innovators. Last but not least, regarding the principle of non-discrimination, which is also a central part of net neutrality: the non-discrimination principle was originally applied to prevent Internet service providers from privileging their own services or partners in vertical markets. But today’s ISPs are not the only digital gatekeepers that can narrow the perspective and freedom of choice of end-users. So at ARCEP, we are now working on assessing to what extent this principle of non-discrimination and openness can be extended to AI infrastructure and AI models, but also to how AI curates and presents content. On this, very shortly, we notably investigate two questions.
The first one is how to preserve the openness of AI markets, notably by ensuring that a plurality of economic players have access to the key inputs necessary for LLM development, including data and computing resources, but also energy. And the second question we are diving into is ensuring that we keep a diversity of content on the internet, knowing that when they use AI chatbots and response engines, end-users only have access to one answer instead of hundreds of web pages. So we must ensure that generative AI is not simply amplifying already dominant sources, but is open to smaller and independent content creators and innovators. That might mean, in the future, working on defining sector-wide frameworks or interconnection standards on fair contractual conditions, as was done for IP interconnection. And to end, the goal is not, of course, to block innovation, but on the contrary, to make sure that innovation and AI are compatible with preserving the internet as a common good.
Luca Belli: Thank you very much, Sandrine, for these excellent thoughts. I think it’s very good to see how you are illustrating that what has been done in terms of internet openness regulation and net neutrality debates over the past 15 years is precisely an attempt to enshrine into law the original philosophy of openness, transparency and decentralization of the internet, and to make sure that when what we can call gatekeepers or points of control emerge, they behave correctly, and if necessary a law protects the rights of the users and the regulator oversees the law to make sure that the obligations are implemented. Now what is very difficult is to understand who the new gatekeepers are and how to implement a law that maybe does not even exist yet in these terms. So I would like to now give the floor to Renata Mielli, who is currently the coordinator of CGI.br. CGI has been leading the debate on net neutrality and internet openness, and now also many AI issues, in Brazil. So Renata, the floor is yours.
Renata Mielli: Thank you, Luca, and thank you all for inviting me to this session, especially because I believe we are establishing a continuity and deepening the debate we started in Riyadh, where last year we discussed AI from a perspective of sovereignty and the empowerment of the Global South, and how to reduce the existing asymmetries in this field. Now we are talking about how to bridge internet governance and principles to AI principles and governance. To contribute to this session, I chose to look at the work we have done in CGI.br on principles for the Internet, and to try to reflect on what makes sense and what does not make sense when we are thinking about AI, from the perspective of establishing a set of principles for the development, implementation and use of AI technologies, taking into account what Luca just said about the differences, the high economic concentration and the opacity of the systems, and taking into account also what Vint said: they are two different beasts. In this sense, I would like to start by looking at what is not covered by these ten principles when we are talking about AI. The first thing I see, and a lot of people are mentioning this, is transparency and explainability, because these two principles are essential when we talk about AI, as it involves a series of procedures that are not present in the same way when we are dealing with the Internet. The Internet is open, the Internet is decentralized, all the protocols are built in a very collaborative way, but this is not the case for AI. So AI system governance, deployment and development needs to ensure high levels of transparency, especially for the social impact assessment of this type of technology, as well as for the creation of compliance processes that ensure other principles like accountability, fairness and responsibility. We are discussing a series of specific principles for AI that were not necessarily conceived in the context of internet governance. In terms of CGI’s Decalogue, I’d like to point out which ones can be, in some way, interoperable with AI principles. In this case I think, of course, of freedom and human rights; democratic and collaborative governance; universality, in terms of access and the benefits of AI for all; diversity, when talking about language, culture, and the necessity of inclusion of all kinds of expressions; also standardization and interoperability between the various models; and, of course, we need a legal and regulatory environment for these systems. We can think that the perspective used for internet governance is applicable to AI principles in context. From another perspective, principles like security need to be addressed together with two others, safe and trustworthy, and ethical, so they can be answered with a discussion about impacts on rights like privacy and data protection. Finally, an important part of this exercise of evaluating internet governance principles and their possible alignment with AI governance principles is to identify what was conceived for the internet that is not applicable in the AI context. In this aspect, only to mention it because I don’t have more time, I point to the principle of net neutrality, because what was proposed there was to preserve neutrality in relation to telecommunications infrastructure, and this is not applicable to AI. And there is no neutrality in the technology itself: AI is not neutral.
And I think inimputability, the principle that the network itself should not be held liable, is another principle that is not easily transferred from the internet to AI, because here we have to understand the responsibility in the AI chain. So these are some thoughts I wanted to share at the beginning of this panel. Thank you very much.
Luca Belli: Thank you very much, Renata. And actually you also bring into the picture something extremely relevant, I think, for which the IGF is also an appropriate forum, being a UN forum: the fact that we have been debating this for 20 years. There are a lot of debates also going on in the Global South about this, for at least 20 years. But what we see in terms of mainstream debates and policymaking, and even the construction of AI infrastructure, especially cloud infrastructure, is an enormous predominance of what we could call the Global North. So it’s very interesting to start to bring the Global South voices into the debate. We’ve started with Brazil. Now we are continuing with Ms. Hadia Elminiawi, who is here representing the African continent, which is an enormous responsibility. So please, Hadia, the floor is yours.
Hadia Elminiawi: Thank you. Thank you so much. And I’m happy to be part of this very important discussion. So let me first start by highlighting the similarities between AI and the Internet that make the Internet’s core values well suited as a foundation for AI governance. AI can be considered one of those general-purpose technologies impacting economic growth, maybe quicker than any other general-purpose technology that has emerged in the past, such as steam engines, electrification, and computers. AI is driving revolutionary changes in all aspects of life, including healthcare, education, agriculture, finance, services, policies, and governance. By definition, AI isn’t just one technology, but a constellation of them, including machine learning, natural language processing, and robotics, that all work together. Similarly, the Internet stands as another powerful general-purpose technology that has fundamentally changed the way we live, work, and interact, enabling new ways of communication, education, service provision, and conducting business. The Internet infrastructure is foundational to artificial intelligence, enabling cloud services, including managing on-site data centers, and real-time applications. In addition, many of the services and applications delivered over the Internet infrastructure are using AI to deliver better experiences, services, and products to users. So when it comes to Africa, the capabilities of African countries regarding AI vary significantly across the continent due to differences in the availability of resources and infrastructure, including reliable and efficient electricity, broadband connectivity, data infrastructure like data centers and cloud services, access to quality data sets, AI-related education and skills, research and innovation, and investment. So last year, in July 2024, the African Union Executive Council endorsed the African Union Continental AI Strategy. The Continental AI Strategy is considered pivotal to achieving the aspirations of the Sustainable Development Goals. And likewise, the internet plays a critical role in achieving the Sustainable Development Goals: no poverty, good health and well-being, quality education, industry, innovation and infrastructure. Other relevant regulatory approaches around the globe include the EU’s AI Act adopted in 2024, the Executive Order for Removing Barriers to American Leadership in AI of January 2025 and sectoral oversight in the US, the UK framework for AI regulation, and the 2023 G7 Guiding Principles and Code of Conduct; China has also developed some rules, and Egypt has the second edition of its National Artificial Intelligence Strategy in 2025. So in all those strategies, we see some of the core principles that have shaped the internet, such as openness, interoperability and neutrality, guiding various AI governance strategies. So the question now becomes: how do we translate those agreed principles and frameworks into actions? And in some cases, what do those principles mean or look like in practical terms? So let’s look at openness and transparency. What does this mean?
Luca Belli: Hadia, may I ask you to wrap up in 30 seconds?
Hadia Elminiawi: Yes, sure. That will be very quick; I’m almost done. It could mean open access to research, and requiring AI models to include components that allow full understanding and auditing. But what does ensuring transparent algorithms mean in practical terms? Is it realistic or even desirable to expect that all AI models be made fully open source? Given the amount of capital investment in these models, requiring complete openness could discourage investment in AI models, destroying a lot of economic value and hindering innovation. At the same time, transparency and openness raise some important ethical and security concerns. Is it truly responsible or logical to allow unrestricted access to tools that could be used to build weapons or plan harmful, disruptive actions? We may need layered safeguards: AI algorithms on top of other AI algorithms to ensure responsible and secure use. So what alternative solutions can we consider? One possibility could be requiring all AI developers to implement robust safety guardrails, and to have these guardrails open source rather than the models themselves. In addition, AI developers could be required to publish the safety guardrails that they have put in place. I guess this is an open discussion. And with that, I would like to wrap up and thank you.
Pari Esfandiari: Thank you very much, Hadia. And on that, I want to thank all the panelists for their insightful contributions. Now I want to invite our invited community members to comment on what they have heard. You are also welcome to share your own views on the broader issues we have touched upon. And on that, I will start with Roxana. Roxana Radu, you have five minutes. Please start.
Roxana Radu: Thank you very much. I’m sorry for not being able to join you in person. Let me start by saying that there is a flourishing discussion now around ethics and principles in AI governance. In fact, what we’ve seen develop over the last five or six years is a plethora of ethical standards and guidelines and values to adhere to. But the key difference with internet governance is the level of maturity in these discussions, and also the ability to integrate those newly identified values into technical, policy and legal standards. What we’ve done in internet governance over the last 30 years is much more than identifying core values. We apply them, we’ve embedded them into core practices, and we are continuing to refine these practices day by day. I think there are four key areas that require attention at this point in time, where we can bridge the internet governance debates and the AI governance discussions. First is the question of market concentration. Luca was already alluding to gatekeepers: how do we define them in this new space, with highly concentrated ownership of the technology, of the infrastructure, and so on and so forth? Second is diversity and equity in participation, engaging different stakeholders, but also stakeholders from parts of the world that are not equally represented. Thirdly, there is the hard-learned lesson of personal data collection, use, and misuse. We have more than 40 years of experience with that in the internet governance space, and we’ve placed emphasis on data minimization: do not collect more than what you need. This lesson does not seem to apply to AI; in fact, it’s the opposite. Collect data even if you are not sure about its purpose currently; machines might figure out a way to use that data in the future. This is the opposite of what we’ve been practicing in recent years in internet governance. And fourthly, and very much linked to these previous points, there’s a timely discussion now around how to integrate some of these core values into technical standards. With AI, there seems to be a preference for unilateral standards, the giants developing their own standards and sharing them through APIs, versus globally negotiated standards, where a broader community can contribute and those voluntary standards could then be adopted by companies and by participants in those processes more broadly. I think we need to zoom in on some of these ways of bringing those core values into practices. And it’s very opportune to do that now at the IGF. Thank you.
Luca Belli: Thank you very much, Roxana. I think there are some interesting points emerging here. Something I want to comment on very briefly, because it was raised before, is that we are discussing here how core internet values can apply to AI. And I think it’s interesting to do this in a joint venture with the Coalition on Net Neutrality, because net neutrality is actually the implementation of core internet values into law. And as any lawyer that has studied Montesquieu would tell you, what counts in the law is the spirit of the law, right? I remember, 10 years ago, writing an article on the spirit of the net, where I was mentioning precisely that net neutrality was the enshrining into law of the spirit of the net, the core internet values, right? So we now have to find a way to translate this into something applicable to AI, and I think that is the huge challenge we have here today. And I’m pretty sure that our friend Bill Drake knows how to solve this challenge. Bill, I believe the floor is yours.
William Drake: Obviously I do not. Thank you. Okay, well, first of all, I congratulate the organizers of this session on putting together an interesting concept. Trying to figure out how you map internet properties and values into the AI space is, I think, definitely a worthwhile activity. As Roxana noted, it builds on all the discussions at the international level in recent years about ethics, whether in UNESCO or other kinds of places, and I think it’s worth carrying this forward. But I would start by noting a few constraining factors, three in particular. First, conceptually, let’s bear in mind, again going back to what Vint said, that we’re talking about different beasts. We’re not talking here about a relatively bounded set of network operators and so on; we’re talking about a vast and diverse range of AI processes and services in an unlimited range of application areas, from medicine to environment and beyond. So which internet properties will apply, generally or in specific contexts, simply can’t be assumed. We need to do close investigation and mapping, and I think there’s a great project there for somebody who wants to develop that matrix. I look forward to reading whoever does that first. There are reasons to wonder whether some of these things really do apply clearly. Renata suggested that net neutrality, for example, might not be so directly applicable. There are a lot of other challenges there, intellectually. Secondly, of course, there are the material interests of the private actors involved. Luca referred to the concentration issues. It’s nice to think about values, but I wouldn’t expect all the US and Chinese companies involved in this space to join an AI engineering task force and hum their support for voluntary international standards. To the contrary, they’ve demonstrated that they’ll do pretty much anything to promote their interests at this phase, including sponsoring military parades for dear leaders in Washington, and so on. So it’s unclear how much they would embrace any kind of externally originated constructs like neutrality, openness, transparency, etc. that don’t really fit well into their immediate profitability profile, and how well these things would apply to very large online platforms and search engines, etc. Again, real challenges there. And lastly, of course, the material interests of states. Net neutrality, of course, is verboten in the United States now; applying it to AI would be too. Generally speaking, multilateral regulatory interventions are impossible to contemplate in the Trump era, at least for those of us who are in North America, and I’m not sure what China would sign on to in that context.
So in principle, you would like to think, though, that transparency and openness with regard to governance processes, especially international governance processes, could be pursued. And there I would just like to flag a couple of quick points before I run out of time, lessons from Internet governance that I think are relevant. First, we have to be real clear about where there’s an actual demand for international governance and regimes and the application of these kinds of values and so on. We simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand. Often people point to things and say, oh, there’s some new phenomenon, we must have governance arrangements. But very often the demand for governance arrangements is not equally distributed across actors, and those highfalutin aspirations don’t get fulfilled. We used to talk about safety, right? There was a lot of international discussion around safety. Now suddenly safety is out the window, and we’re all talking about how we want to promote innovation and investment. So it’s easy to say that we have this demand to do all these wonderful new normative things, but in reality, when push comes to shove, we have to look at where there’s a real functional demand. Where do you actually need international governance, interoperability or harmonization of rules? In the telecom space, if you look historically, with radio frequency spectrum we had to have non-interference. Telecom networks had to be interconnected and have standards to allow the networks to pass traffic between them. So there was a strong incentive for states to get on board and do something, even if they had different visions of how to do that and could fight over it. Which aspects of the AI process absolutely require some kind of coordination or harmonization? It’s not entirely clear, and I think we can’t just assume that. My other point, because I don’t want to run out of time, is to say, as somebody who was around 20 years ago and remembers all the fights over internet governance and what internet governance is and so on: we are in a liminal moment, like we were 20 years ago, where people are not clear what the phenomenon is, how we define it, what governance means in this context, etc. This requires a great deal more thinking when you’re applying it to the specificities of the AI space. I hear a lot of these discussions in the UN where people seem to be just grafting constructs from other international policy environments onto AI and saying, well, we’ll just apply the same rules that apply elsewhere. And this is like saying that we’ll apply the rules from the telegraph to the telephone, and from the telephone to television; with every new technology, we’re going to look at it through the lens of previous technologies, but often that doesn’t work so well. And my last point, and then I’ll stop: I’d be very careful in thinking about multilateral action. I noticed that the G77 and China, in a reaction to the co-facilitators’ text on AI, are saying that they want binding international commitments coming out of the UN process, that they will not accept purely informal agreements coming out of the UN process. I look at what’s going on in the AI space and I’m thinking, seriously, what kind of binding international agreements are we going to begin negotiating in the United Nations in the near term? And if you set that up at the front end as the object that you’re trying to drive towards, you can just see how difficult all this is going to become very quickly. So I probably went over five minutes, so I’ll stop. Thank you.
Pari Esfandiari: Thank you very much, Bill. For the sake of time, I’m not going to reflect; you packed an awful lot of information into that section, but we don’t have enough time. So I will go directly to Shuyan Wu. Shuyan Wu, the floor is yours.
Shuyan Wu: Okay, thank you. Hello, everyone. It’s a great pleasure to attend this important discussion. I am from China Mobile, one of the world’s largest telecom operators, so I’d like to share practices and experiences from China Mobile on bridging internet and AI governance. In the age of the internet, we continue to promote the development of the internet ecosystem towards fairness, transparency, and inclusiveness. This commitment is reflected in our efforts across infrastructure development, user rights protection, and bridging the digital divide. Firstly, in terms of infrastructure development, we strive to ensure equal access to, and inclusive use of, internet services. China Mobile’s mobile and broadband networks now cover all villages across the country. We’ve also built the world’s largest and most extensive 5G network. Second, when it comes to protecting users’ rights and interests, we work actively to create a transparent and trustworthy online environment. We provide clear, user-friendly service mechanisms and have introduced quality management tools to ensure users’ right to information and independent decision-making. For specific groups such as the elderly and minors, we focus on fraud prevention education and offer customized services to build a safer and greener digital space. Third, to bridge the digital divide and support inclusive growth, we’ve implemented targeted solutions. For elderly users, we offer dedicated discounts and have tailored our smart services to their needs. For minors in rural areas, our 5G smart education cloud network services are helping to reduce the gap in education resources between urban and rural communities. As we transition from the internet era to the age of AI, China Mobile is actively adapting its experience and capabilities to the evolving needs of AI governance. We are striving to build a digital ecosystem featuring universal access, decentralization, transparency, and inclusiveness. We are investing in AI infrastructure to promote resource sharing and encourage decentralized innovation, backed by our strong computing power, data resources, and product solutions such as large language models and AI development platforms. At the same time, we continually leverage AI capabilities to build a transparent and trustworthy digital environment, effectively safeguarding user rights. For instance, China Mobile applies AI-powered information detection technologies in scenarios like video calls and financial services to help users identify false or harmful content. Moreover, we are committed to ensuring that the benefits of AI are shared by all. For minors, we have launched personalized education and immersive education scenario interaction solutions. For the elderly, we offer AI-powered entertainment, health monitoring, and safety services. And for rural areas, our smart village doctor system delivers quality healthcare to remote communities. That’s all for my sharing. Thank you.
Olivier Crepin-Leblond: As everyone points over to me, thank you very much. Now we’re going to open the floor for your input and your feedback on what we’ve heard so far. I’m also the remote participation moderator, and there’s been a really interesting debate going on online; I’m not sure how many of you have been following it. I was going to ask whether we could hear from the two main participants who were speaking back and forth online, Alejandro Pisanty and Vint Cerf; Vint, of course, is always active both online and with us. After those two, we’ll start with the queue in the room. All right, let’s get going. Alejandro, you have the floor. Thank you.
Alejandro Pisanty: Good morning. Can you hear me well? Yes, very well. Thank you. I was making these points also in previous discussions of the Core Internet Values Dynamic Coalition. If you are trying to translate the experience of governance from the internet to artificial intelligence, I think there are a few points that are valuable to take into account, and many of them have been made already, so I will try to group them. First, you have to define pretty well what you want to govern: to which branch of the enormous world of artificial intelligence you actually want to apply governance. Otherwise you will have some serious ill effects. Using AI for molecular modeling, protein folding and that kind of problem, or using it as a back-office system for detecting fraud in credit cards, are in turn very different beasts. So it is very important not to regulate with such generality that rules for one of them will impede progress in others where they are absolutely not necessary. Second, and we learned this from 30 years of internet governance, make sure you are governing the right thing, in the following sense: what does AI, like the internet in its turn, bring that is new to things we already know? What rules do we already have that we can simply apply, or modify to take AI into account? For example, we have purchasing rules, especially in governments, where you know the constraints on systems that you can buy for government: they cannot be discriminatory, they cannot be harmful, and so forth. So you can apply those rules instead of creating a whole new world. It is like medical devices: you already have so many rules for automated medical devices that you can extend to artificial intelligence. The harms and the consequences of the harms will be different; they will be amplified, there is probability, there is uncertainty; but we know how to deal with that, and we just need to change the scale and gain a better understanding of these factors. Next, what do you expect to obtain from governance? Do you want more competition? A reduction of discrimination and bias? More respect for intellectual property? More access to global resources for the Global South, and so forth? Because this will determine the institutional and organizational design. And next, and most important (and this is something that the NetMundial+10 meeting, for example, addressed, among the other good things it did), how do you actually bring the different stakeholders together? Who are the stakeholders, and how do you bring them to the table? Say you want to regulate large language models provided over the internet as chatbots, which are the dominant aspect of public discussion these days. Why would they come to the table? Why would OpenAI, Google, Meta, et cetera, not to speak of Mistral and certainly the providers in China and other countries, which operate under completely different sets of rules, come together and agree to limit themselves in some way? And to sit at the table with people who are their users or their clients, and potentially their competitors if something arises from their innovation? And especially, how do you bring them together to put some money into the operation of the system, to agree to have a structure, to agree to have their hands tied to some extent?
What has happened in internet governance, for example, is very different for, let’s say, the domain name system and for fighting phishing and scams. For the domain name system, you had companies fearing that strong rules for competition would come from the US government, and they finally agreed to come together with civil society and the technical community, which is also a key point. The experts always have to be at the table. As the ICANN paper has stated very recently for internet governance, the technical community is not just one more participant; it is a pillar, and you need to know what the limitations and capabilities of the technology are. I’ll stop there. Thank you.
Olivier Crepin-Leblond: Thank you, Alejandro. OK, next, Vint Cerf.
Vint Cerf: First, I have to unmute. So thank you so much, Alex. I always enjoy your line of reasoning. Let me suggest a couple of small points. The first is that, with regard to regulation of AI-based applications, I think the focus of attention should be on risk to the users of those technologies and, of course, potential liability for the provider of those applications. So a high-risk application, such as medical diagnosis or recommended medical treatment, or maybe financial advice, ought to have a high level of safety associated with it, which suggests that, if there is regulation, the provider of the service has to show due diligence: that they have taken steps that are widely agreed to reduce risk for the user. So risk is probably a very important metric here, and, concurrently, liability will be a very important metric for action by the providers of AI-based services. Another thing of significance is the provenance of the materials used to train these large language models, and explainability (chain of reasoning, chain of thought, those sorts of things) to help us understand the output that comes from interacting with these large language models. And finally, I mentioned this earlier, but let me reiterate that the agent-to-agent protocol and the model context protocols are there, I think, partly to make things work better and more reliably, but they might also be important for limiting liability. In other words, there’s a motivation for implementing and designing these things with great care so that it is clear, for example, in a multi-agent interaction, which agents might be responsible for which outcomes; again, something that relates to liability for parties offering these products and services. So I’ll stop there. I hope that others participating will be able to elaborate on some of these ideas.
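Vint’s risk-based framing can be made concrete with a small sketch. The tier names, application categories, and safeguard lists below are illustrative assumptions only (they are not drawn from any specific regulation, though the tiered idea echoes frameworks such as the EU AI Act), but they show how “high-risk applications require higher safety levels” becomes checkable logic.

```python
# A minimal sketch of risk-based due diligence. The tiers and safeguard
# lists are hypothetical, chosen to mirror Vint's examples of high-risk
# uses (medical diagnosis, financial advice).
RISK_TIERS = {
    "medical_diagnosis": "high",
    "financial_advice": "high",
    "spam_filtering": "minimal",
}

REQUIRED_SAFEGUARDS = {
    "high": ["documented risk assessment", "human oversight", "audit logging"],
    "minimal": [],
}

def safeguards_for(application: str) -> list[str]:
    """Return the safeguards a provider must show due diligence on."""
    # Unknown applications default to the cautious tier.
    tier = RISK_TIERS.get(application, "high")
    return REQUIRED_SAFEGUARDS[tier]

print(safeguards_for("medical_diagnosis"))
# ['documented risk assessment', 'human oversight', 'audit logging']
```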
Olivier Crepin-Leblond: Thank you, Vint. Just one point: earlier in the chat, you mentioned, I’m seeing here, indelible ways to identify sources of content used to train AI models. Could you explain a bit?
Vint Cerf: Yes, I was trying to refer to provenance here. The thing people worry about is that the material used to train the model may be of uncertain origin. If someone says, well, how can I rely on this model? How do I know what it was trained on? Here, I think it should be possible to identify what the sources were in a way that is incontrovertible. Digitally signed documents, or materials whose provenance can be established, are important, because then we can go back to the parties providing those things and ask them questions about the verifiability of the material in that training data.
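To illustrate what “incontrovertible” provenance could look like in practice, here is a minimal sketch of a digitally signed training-data manifest, using the Python cryptography library. The manifest format and file names are assumptions for illustration; real provenance standards such as C2PA define much richer schemas.

```python
# A sketch of verifiable training-data provenance: hash each source
# document, then sign the manifest so tampering is detectable.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: hash each (hypothetical) source document.
documents = {"corpus/article-001.txt": b"Example source text ..."}
manifest = {
    path: hashlib.sha256(content).hexdigest()
    for path, content in documents.items()
}
payload = json.dumps(manifest, sort_keys=True).encode()

private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(payload)

# Auditor side: verify the signature over the manifest; this raises
# cryptography.exceptions.InvalidSignature if anything was altered.
private_key.public_key().verify(signature, payload)
print("Manifest verified: the training sources are as the publisher declared.")
```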
Olivier Crepin-Leblond: OK, thanks very much for this, and apologies for the wait. Over to the gentleman standing at the microphone; please introduce yourself in your intervention.
Audience: Thank you. Yes. Hi. Thank you for the excellent panel. I’m Dominique Hazaël-Massieux; I work for W3C, where, among other things, I oversee our work around AI and its impact on the web, so this is a place where a lot of web standards are being developed. I wanted to make two remarks, one on scope and one on incentives for governance. On scope: we are at the IGF, and it’s been mentioned a number of times that AI is extremely broad. One useful way to segment the problem is to look at the intersection of AI and the internet, and there are a number of those intersections: AI has been fed from a lot of web content; a lot of web content is now being produced through AI; and AI is starting, as Vint was describing, to be used as agents on the web and on the internet. So looking exactly at these intersections, and at what AI changes in the existing landscape, is where I would focus. Web content and services are critical components of AI providers’ strategies, for both building their tools and distributing them; and that can only remain true if they don’t impoverish the ecosystem to the point where there is no more content to feed on, and no more services willing to reuse or integrate with their systems. So at the end of the day, particularly in this emerging agent architecture that Vint was describing, I think it is really a matter of understanding what the expectations for these agents are, learned from rules that already exist. For instance, in the web space we have a number of very clear expectations about what you ought to do if you are a browser, literally a user agent. Understanding how those apply to AI-based agents is, I hope, going to be very illuminating about what kind of governance we should put in place.
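Dominique’s point that AI agents can inherit rules already defined for web user agents has a very direct example: robots.txt. The sketch below uses Python’s standard-library parser; the rules and URLs are hypothetical, while GPTBot is a real crawler token that OpenAI documents as honouring robots.txt.

```python
# Applying an existing web governance mechanism (robots.txt) to an AI
# crawler: the publisher's expectations are machine-readable.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""\
User-agent: GPTBot
Disallow: /private/
""".splitlines())

print(rp.can_fetch("GPTBot", "https://example.org/private/page"))  # False
print(rp.can_fetch("GPTBot", "https://example.org/index.html"))    # True
```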
Olivier Crepin-Leblond: Thank you very much for your intervention. And the next person in line, please introduce yourself.
Audience: Yeah. Hi, sorry. My name is Andrew Campling; in this context, I’m an internet standards and internet governance enthusiast. To build on Bill’s comments, I probably wouldn’t start from here either. But here we are and, to be somewhat pessimistic, we’re probably too late. If I were going to look anywhere to start, it wouldn’t be the internet. I’d look closely at lessons from social media specifically, where we’ve got, in my opinion, a small number of highly dominant players who are uninterested in collaborative multi-stakeholder initiatives unless those are commercially worthwhile to them. If we look to the internet model and try to build a collaborative, multi-stakeholder governance model, I don’t think there’s a commercial imperative for the players to take part. It’s far too easy to game the system and drag things out, and by the time something is agreed, it will be irrelevant. So if I were to start anywhere, I’d look closely at duty of care as a key requirement, and also explore why we wouldn’t apply the precautionary principle widely, and use those as two foundational building blocks. I wouldn’t start with internet governance. So apologies for the pessimism, but I think we have to be pragmatic and realistic about where we are. Thank you.
Olivier Crepin-Leblond: I should say, that was quite a British intervention. Okay, thank you so much. Shall I pass it over to Luca, or should we go to the conclusions, because there are only about six minutes left? Yeah, I think we can go.
Luca Belli: We have six minutes. Do we have any other comments or questions in the room? I don’t see any hands, and we have exhausted the comments from the online participants. I think we can go for a round of very quick conclusions: very prehistoric tweets of 240 characters. But first we have to go to Yik Chan (sorry, Yik Chan is the person), who, like a ChatGPT, will distill all the knowledge into a five-minute result.
Yik Chan Ching: Okay, and thank you very much for giving me five minutes to comment. I’m from the PNAI, the Policy Network on Artificial Intelligence, which is an intersessional process of the IGF, so it’s very interesting to have this joint session between the PNAI and the DCs. I found the discussion really fascinating, and I have two observations, based also on the PNAI’s past three years of research on AI governance; for example, we did two reports on big issues: liability and interoperability, and environmental protection. The first observation is about the institutional setting, because Bill asked how we can collaborate at the global level and what the incentives or interests are. First of all, we know there is a UN process going on, with the scientific panel and the global dialogue, so we should probably give them some opportunity and a little bit of trust, and wait to see what the outcomes at the UN level are. Secondly, in my experience, what really makes a difference between AI governance and internet or social media governance is that we have learned from past experience, especially the social media experience. We have such a vibrant discussion, with early intervention and the precautionary principle, as our British colleague said, from different stakeholders: from civil society, from academia, and from industry. So in that sense we are much more precautionary than in the social media and internet eras, which will probably make a difference. The second observation concerns which areas we should look at. From my experience, and the PNAI’s, I agree with Vint. First of all, risk: risk is very important. Secondly, safety issues. And of course liability, because liability is the mechanism by which we hold AI developers and deployers accountable, so that’s very important. The third one, of course, is interoperability. When we talk about interoperability, it’s not only about principles, ethics, and norms, but also standards, and standards will play a significant role in regulating AI. I’m very glad to see a lot of progress in AI standard-making. For example, at the EU level there is a lot of standards work; they are going to announce EU standards under the AI Act. And there has also been huge progress in standard-making in China on safety and other issues. So I think AI standards will be one of the crucial areas for regulating AI in the future. I’ll stop here. Thank you very much.
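Yik Chan’s pairing of interoperability standards with liability suggests one concrete design: interoperable agent messages that carry attribution metadata, so that in a multi-agent chain it stays clear which agent produced which output, the accountability property Vint raised earlier. The envelope below is a hypothetical illustration, not the actual A2A or MCP wire format.

```python
# A hypothetical message envelope for multi-agent interactions: each hop
# records who produced what, so responsibility can be traced afterwards.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentMessage:
    agent_id: str       # the party responsible for this content
    content: str
    sources: list[str]  # provenance of the inputs this hop relied on
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A two-hop chain: a research agent's output feeds a drafting agent.
hop1 = AgentMessage("research-agent-01", "Summary of findings ...",
                    sources=["https://example.org/report.pdf"])
hop2 = AgentMessage("drafting-agent-07", "Final advice text ...",
                    sources=[hop1.agent_id])

# The audit trail is simply the serialized chain of envelopes.
print(json.dumps([asdict(hop1), asdict(hop2)], indent=2))
```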
Olivier Crepin-Leblond: Thank you very much, Yik Chan. There are two minutes left, I guess, for our co-moderators’ reflections. I was going to ask for one tweet from each of our participants, but I don’t know if we can do it in the two minutes. Should we try? One tweet? Yeah, why not? A quick tweet. Okay, let’s start with the table then, with the person furthest to my right, which is your left. Bill Drake?
Luca Belli: A message of hope in 20 seconds.
William Drake: A message of hope in 20 seconds. Wow.
Luca Belli: Or of despair, as you prefer.
William Drake: I was going to say, abandon all hope. All right. Well, I’ll just echo again the point about being very clear about exactly what demand there is, for what kind of governance, over what kinds of processes. Too much of the discussion around these issues is too generic and high-level to be very meaningful when we get down to the real nitty-gritty of what’s going on in different domains of AI development and application, so we need a dose of realism there. But I like the idea of the mapping effort you’re trying to do, and I look forward to seeing you develop it further.
Olivier Crepin-Leblond: Thank you, Bill. Next, Shuyan.
Shuyan Wu: Okay, thank you. It’s my first time attending this kind of discussion, and it has been very valuable to share my opinions and discuss with all of you. I hope I’ll have another chance to continue exchanging ideas with you all. Thank you.
Hadia Elminiawi: Regional and international strategies and cooperation should not be seen as conflicting with national sovereignty. National and international strategies, cooperation, and collaboration should go in parallel and hand in hand. They should support and strengthen each other’s goals; they need to have aligned objectives and be implemented simultaneously.
Olivier Crepin-Leblond: Thank you, Hadia. Sandrine.
Sandrine ELMI HERSI: Yes, so we can no longer think of AI governance and internet governance as separate entities. As we noted today, given the strong interlinks between LLMs and internet content and services, applying internet core principles to AI is not a whim or an accessory. It is the only way to preserve the openness and richness of the internet we spent years building, and we can and must act now to establish a multi-stakeholder approach with that in mind.
Olivier Crepin-Leblond: Thank you. Renata.
Renata Mielli: Just three words: how to transform these principles into technical standards; we talked about this. And I want to say we need oversight, agency, and regulation. We need to remember that governance and regulation are two different things: governance needs to be multistakeholder, and we need national regulations for AI systems.
Olivier Crepin-Leblond: Thank you. Roxana?
Roxana Radu: I’ll just say that we need to walk the talk. Now that we’ve done this initial brainstorming session, I look forward to seeing what we can come up with together in terms of bridging this gap between what we’ve learned in internet governance and where we’re starting in AI discussions. This is not to say that everything applies, but we’ve learned a lot, and we shouldn’t reinvent the wheel.
Olivier Crepin-Leblond: Thank you. And finally, Vint.
Vint Cerf: I think my summary here is very simple: we just have to make sure that when we build these systems, we keep safety in mind for all of the users. That’s going to take a concerted effort from all of us.
Olivier Crepin-Leblond: Thank you very much, and if anybody in the room is interested in continuing this discussion, which I hope you are after this session, then please come over to the stage and share your details with us. You can get onto the DCs’ mailing lists, continue the discussion, and participate in future work of this kind. Thank you.
Pari Esfandiari
Speech speed: 127 words per minute
Speech length: 599 words
Speech time: 282 seconds
Internet’s foundational principles of openness, decentralization, and transparency should guide AI governance as generative AI becomes a main gateway to content
Explanation
Pari argues that as generative AI increasingly serves as the primary access point for online content, the core values that made the internet successful – being global, interoperable, open, decentralized, end-to-end, robust and reliable – should be applied to govern AI systems. She emphasizes that these were deliberate design choices that made the internet a global commons for innovation and human agency.
Evidence
References the internet’s core values: global, interoperable, open, decentralized, end-to-end, robust and reliable, and freedom from harm as deliberate design choices
Major discussion point
Bridging Internet Core Values and AI Governance
Topics
Infrastructure | Legal and regulatory
Fundamental network neutrality principles such as generativity and competition on a level playing field should apply to AI infrastructure, AI models, and content creation
Explanation
Pari presents this as one of the two overarching questions for the session, suggesting that the principles ensuring fair competition and innovation in internet infrastructure should be extended to AI systems. This includes ensuring that AI infrastructure and models maintain the same level playing field that network neutrality provides for internet services.
Evidence
Presented as one of two main questions guiding the session discussion
Major discussion point
Applying Network Neutrality Principles to AI
Topics
Infrastructure | Legal and regulatory
Luca Belli
Speech speed: 159 words per minute
Speech length: 1229 words
Speech time: 463 seconds
Internet and AI are “two different beasts” – Internet built on open, decentralized architecture while AI is highly centralized and proprietary
Explanation
Luca emphasizes the fundamental architectural differences between the internet and AI systems. While the internet was built on open, decentralized, transparent, and interoperable principles that enabled its success over 50 years, AI operates through highly centralized, proprietary, and often opaque systems controlled by few companies.
Evidence
References Vint Cerf’s expression from previous year and contrasts internet’s 50+ year success with current AI centralization trends
Major discussion point
Bridging Internet Core Values and AI Governance
Topics
Infrastructure | Legal and regulatory
Hadia Elminiawi
Speech speed: 122 words per minute
Speech length: 721 words
Speech time: 351 seconds
Core internet values like openness, interoperability, and neutrality are appearing in various AI governance strategies globally
Explanation
Hadia observes that the fundamental principles that shaped the internet are being incorporated into AI governance frameworks worldwide. She notes that various international strategies and regulatory approaches are adopting these core principles as foundational elements for AI governance.
Evidence
References EU’s AI Act (2024), US Executive Order for AI Leadership (January 2025), UK Framework for AI Regulation, G7’s Guiding Principles and Code of Conduct (2023), China’s AI rules, Egypt’s National AI Strategy (2025), and African Union Continental AI Strategy (July 2024)
Major discussion point
Bridging Internet Core Values and AI Governance
Topics
Legal and regulatory | Infrastructure
Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns
Explanation
Hadia questions whether requiring full transparency and open-source access to AI models is practical or desirable. She argues that given the massive capital investments in AI development, complete openness could discourage investment and destroy economic value, while also raising ethical and security concerns about unrestricted access to potentially harmful tools.
Evidence
Points to the substantial capital investment in AI models and security risks of unrestricted access to tools that could be used for weapons or harmful actions
Major discussion point
AI Transparency and Explainability Challenges
Topics
Legal and regulatory | Cybersecurity
Agreed with
– Sandrine ELMI HERSI
– Renata Mielli
– Vint Cerf
Agreed on
Need for AI transparency and explainability to address opacity challenges
Disagreed with
– Sandrine ELMI HERSI
Disagreed on
Feasibility of complete AI transparency and openness
Alternative solutions like requiring open-source safety guardrails rather than full model transparency should be considered
Explanation
As an alternative to complete model transparency, Hadia suggests that AI developers could be required to implement and make public their safety guardrails and protective measures. This approach would provide transparency about safety measures without exposing the entire model architecture.
Evidence
Suggests requiring AI developers to implement robust safety guardrails and publish information about these safety measures
Major discussion point
AI Transparency and Explainability Challenges
Topics
Legal and regulatory | Cybersecurity
Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them
Explanation
Hadia emphasizes that international cooperation and national sovereignty in AI governance should be complementary rather than competing approaches. She argues that national and international strategies should have aligned objectives and be implemented simultaneously to support each other’s goals.
Major discussion point
Global South Perspectives and Digital Divide
Topics
Legal and regulatory | Development
Roxana Radu
Speech speed: 144 words per minute
Speech length: 504 words
Speech time: 208 seconds
Internet governance experience over 30 years provides mature framework for applying values to technical, policy and legal standards that AI governance lacks
Explanation
Roxana highlights the significant difference in maturity between internet governance and AI governance discussions. While internet governance has spent decades not just identifying core values but actually implementing and embedding them into practical standards and practices, AI governance is still primarily focused on identifying ethical principles without the same level of practical application.
Evidence
Contrasts 30+ years of internet governance development with current early-stage AI ethics discussions
Major discussion point
Bridging Internet Core Values and AI Governance
Topics
Legal and regulatory | Infrastructure
William Drake
Speech speed: 175 words per minute
Speech length: 1224 words
Speech time: 418 seconds
Need to define precisely what aspects of AI require governance rather than applying generic high-level principles
Explanation
William argues that the AI field is too vast and diverse to apply broad governance principles uniformly across all applications. He emphasizes the need for careful investigation and mapping to determine which internet properties apply generally versus in specific contexts, rather than making assumptions about universal applicability.
Evidence
Points to the unlimited range of AI applications from medicine to environment and suggests need for close investigation and mapping
Major discussion point
Bridging Internet Core Values and AI Governance
Topics
Legal and regulatory
Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency
Explanation
William expresses skepticism about major AI companies voluntarily adopting governance frameworks that don’t align with their immediate profitability goals. He argues that these companies have demonstrated they will prioritize their business interests over external governance constructs, making multilateral cooperation challenging.
Evidence
References companies’ demonstrated behavior of prioritizing business interests, including ‘sponsoring military parades for dear leaders in Washington’
Major discussion point
Market Concentration and Gatekeeping Issues
Topics
Economic | Legal and regulatory
Must identify where there’s actual functional demand for international governance rather than assuming need based on technology existence
Explanation
William warns against assuming that new technology automatically creates demand for governance arrangements. He argues that successful international governance requires clear functional needs for coordination or harmonization, using historical examples from telecommunications where technical requirements drove cooperation.
Evidence
Uses historical examples of radio frequency spectrum and telecom network interconnection where technical necessity drove international cooperation
Major discussion point
Governance Implementation Challenges
Topics
Legal and regulatory
Disagreed with
– Andrew Campling
Disagreed on
Starting point for AI governance frameworks
Multilateral regulatory interventions face political obstacles, and binding international agreements may be unrealistic
Explanation
William points to current political realities that make international AI governance challenging, particularly noting that net neutrality is now prohibited in the US and that the G77 and China are demanding binding commitments from UN processes. He questions the feasibility of negotiating binding international AI agreements in the current political climate.
Evidence
Notes that net neutrality is ‘verboten in the United States now’ and references G77 and China’s demands for binding international commitments from UN AI processes
Major discussion point
Governance Implementation Challenges
Topics
Legal and regulatory
Disagreed with
– Yik Chan Ching
Disagreed on
Optimism vs pessimism about multilateral AI governance
Sandrine ELMI HERSI
Speech speed: 125 words per minute
Speech length: 791 words
Speech time: 378 seconds
Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability
Explanation
Sandrine argues that despite some progress through sectoral initiatives and codes of conduct, many AI models lack sufficient transparency. She emphasizes that greater openness, particularly to researchers, is essential for improving both the auditability and explainability of AI systems, as well as their efficiency.
Evidence
References ARCEP’s ongoing technical hearings and file testing with data scientists, and notes some progress through sectoral initiatives and codes of conduct
Major discussion point
AI Transparency and Explainability Challenges
Topics
Legal and regulatory
Agreed with
– Renata Mielli
– Vint Cerf
– Hadia Elminiawi
Agreed on
Need for AI transparency and explainability to address opacity challenges
Disagreed with
– Hadia Elminiawi
Disagreed on
Feasibility of complete AI transparency and openness
Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services
Explanation
Sandrine argues that the non-discrimination principle originally applied to prevent ISPs from favoring their own services should now be extended to AI systems. She contends that today’s digital gatekeepers include not just ISPs but also AI systems that can narrow user perspectives and freedom of choice.
Evidence
References ARCEP’s work on assessing extension of non-discrimination principles and draws parallel to original ISP regulation
Major discussion point
Applying Network Neutrality Principles to AI
Topics
Infrastructure | Legal and regulatory
Disagreed with
– Renata Mielli
Disagreed on
Direct applicability of net neutrality principles to AI
Need to preserve plurality of economic players’ access to key inputs for AI development including data, computing resources, and energy
Explanation
Sandrine emphasizes the importance of maintaining competitive AI markets by ensuring diverse economic actors can access essential resources for AI development. This includes not just data but also the computational power and energy resources necessary for training and running AI models.
Evidence
References ARCEP’s investigation into preserving openness of AI markets
Major discussion point
Market Concentration and Gatekeeping Issues
Topics
Economic | Infrastructure
Need to ensure diversity of content when AI chatbots provide single answers instead of hundreds of web pages
Explanation
Sandrine highlights a fundamental shift in how users access information – from browsing multiple web pages to receiving single AI-generated responses. She argues this change requires ensuring that AI systems don’t simply amplify dominant sources but remain open to smaller and independent content creators.
Evidence
Contrasts traditional web search results (hundreds of pages) with AI chatbot responses (single answer)
Major discussion point
Applying Network Neutrality Principles to AI
Topics
Sociocultural | Legal and regulatory
Renata Mielli
Speech speed: 111 words per minute
Speech length: 674 words
Speech time: 361 seconds
AI systems need transparency and explainability especially for social impact assessment and compliance processes, unlike the naturally open internet protocols
Explanation
Renata argues that AI governance requires specific principles like transparency and explainability that weren’t as critical for internet governance because the internet was built on naturally open, decentralized protocols developed collaboratively. AI systems, being opaque and centralized, require these additional transparency measures for social impact assessment and compliance.
Evidence
Contrasts internet’s open, decentralized, collaborative protocol development with AI’s opacity and centralization
Major discussion point
AI Transparency and Explainability Challenges
Topics
Legal and regulatory
Agreed with
– Sandrine ELMI HERSI
– Vint Cerf
– Hadia Elminiawi
Agreed on
Need for AI transparency and explainability to address opacity challenges
Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure
Explanation
Renata identifies a fundamental difference between internet infrastructure and AI technology in terms of neutrality. While net neutrality was designed for telecommunications infrastructure that could be neutral, AI technology itself is inherently non-neutral, making direct application of net neutrality principles problematic.
Evidence
Distinguishes between neutrality in telecommunications infrastructure versus the inherent non-neutrality of AI technology
Major discussion point
Applying Network Neutrality Principles to AI
Topics
Infrastructure | Legal and regulatory
Disagreed with
– Sandrine ELMI HERSI
Disagreed on
Direct applicability of net neutrality principles to AI
Need to transform principles into technical standards while distinguishing between governance and regulation
Explanation
Renata emphasizes the practical challenge of moving from high-level principles to implementable technical standards. She stresses the importance of understanding that governance and regulation are different concepts, with governance being multistakeholder while regulation requires national-level legal frameworks.
Major discussion point
Governance Implementation Challenges
Topics
Legal and regulatory | Infrastructure
Vint Cerf
Speech speed: 133 words per minute
Speech length: 1262 words
Speech time: 567 seconds
Provenance of information used by AI agents and references must be available for critical evaluation of outputs
Explanation
Vint emphasizes the critical importance of being able to trace and verify the sources of information used by AI systems. He argues that users need access to the provenance of training data and references to conduct their own critical thinking and evaluation of AI outputs, particularly given concerns about hallucination and counterfactual information.
Evidence
References the problem of AI hallucination and generation of counterfactual output, emphasizing need for critical evaluation capabilities
Major discussion point
AI Transparency and Explainability Challenges
Topics
Legal and regulatory
Agreed with
– Sandrine ELMI HERSI
– Renata Mielli
– Hadia Elminiawi
Agreed on
Need for AI transparency and explainability to address opacity challenges
Agent-to-agent protocols and model context protocols are being developed to ensure interoperability among AI systems
Explanation
Vint describes emerging technical standards (A2A for agent-to-agent interaction and MCP for model context protocol) that aim to create interoperability between AI agents. These protocols are designed to provide clarity and confidence in semantic matching between agents, preventing the kind of information degradation that occurs in the telephone game.
Evidence
Explains A2A (agent-to-agent) and MCP (model context protocol) standards and uses the analogy of the telephone parlor game to illustrate communication degradation risks
Major discussion point
Technical Standards and Interoperability
Topics
Infrastructure | Digital standards
Agreed with
– Yik Chan Ching
– Audience
Agreed on
Importance of technical standards and interoperability for AI systems
Focus should be on risk to users and liability for providers, with high-risk applications requiring higher safety levels
Explanation
Vint advocates for a risk-based approach to AI regulation, where the level of safety requirements corresponds to the potential risk to users. He suggests that high-risk applications like medical diagnosis or financial advice should have stringent safety requirements, with providers demonstrating due diligence to reduce user risk.
Evidence
Provides examples of high-risk applications including medical diagnosis, medical treatment recommendations, and financial advice
Major discussion point
Risk-Based AI Governance Approach
Topics
Legal and regulatory | Cybersecurity
Agreed with
– Yik Chan Ching
– Alejandro Pisanty
Agreed on
Risk-based approach to AI governance with focus on user safety and provider liability
Shuyan Wu
Speech speed: 120 words per minute
Speech length: 483 words
Speech time: 239 seconds
China Mobile’s experience shows importance of ensuring equal access, protecting user rights, and bridging digital divides in AI era
Explanation
Shuyan describes China Mobile’s comprehensive approach to digital inclusion, covering infrastructure development (5G networks reaching all villages), user protection (transparent services, fraud prevention), and targeted solutions for vulnerable groups (elderly, minors, rural communities). This experience is being adapted for AI governance to ensure universal access and inclusive benefits.
Evidence
Provides specific examples: China Mobile’s 5G network covering all villages, customized services for elderly and minors, 5G smart education for rural areas, AI-powered fraud detection, and smart village doctor systems
Major discussion point
Global South Perspectives and Digital Divide
Topics
Development | Infrastructure
Audience
Speech speed: 147 words per minute
Speech length: 544 words
Speech time: 220 seconds
Need to focus on intersection of AI and internet where AI feeds on web content and produces web content
Explanation
The audience member from W3C suggests that rather than trying to govern all of AI, focus should be on the specific intersections between AI and the internet. This includes how AI systems consume web content for training, produce web content as output, and operate as agents within the web ecosystem.
Evidence
Mentions AI being fed from web content, web content being produced through AI, and AI being used as agents on the web
Major discussion point
Technical Standards and Interoperability
Topics
Infrastructure | Digital standards
Agreed with
– Vint Cerf
– Yik Chan Ching
Agreed on
Importance of technical standards and interoperability for AI systems
Duty of care and precautionary principle should be foundational building blocks for AI governance
Explanation
Andrew Campling argues that instead of starting with internet governance models, AI governance should be built on duty of care requirements and precautionary principles. He suggests these would be more practical and realistic foundations given the commercial realities and dominant players in the AI space.
Evidence
Draws comparison to social media governance challenges with dominant players disinterested in collaborative initiatives
Major discussion point
Risk-Based AI Governance Approach
Topics
Legal and regulatory | Cybersecurity
Disagreed with
– William Drake
– Andrew Campling
Disagreed on
Starting point for AI governance frameworks
Alejandro Pisanty
Speech speed: 167 words per minute
Speech length: 718 words
Speech time: 257 seconds
Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks
Explanation
Alejandro argues for leveraging existing regulatory frameworks rather than building AI governance from scratch. He suggests that many rules already exist for automated systems, medical devices, and government procurement that can be extended or modified to address AI-specific concerns like discrimination and harm, rather than creating completely new regulatory structures.
Evidence
Provides examples of existing government purchasing rules requiring non-discriminatory systems, medical device regulations for automated systems, and established approaches to handling uncertainty and probability in regulation
Major discussion point
Risk-Based AI Governance Approach
Topics
Legal and regulatory
Agreed with
– Vint Cerf
– Yik Chan Ching
Agreed on
Risk-based approach to AI governance with focus on user safety and provider liability
Yik Chan Ching
Speech speed: 144 words per minute
Speech length: 500 words
Speech time: 207 seconds
Risk assessment, safety issues, and liability mechanisms are crucial for holding AI developers accountable
Explanation
Yik Chan emphasizes three key areas for AI governance based on PNAI’s research: risk assessment as a fundamental approach, safety as a critical concern, and liability as the mechanism to ensure AI developers and deployers remain accountable for their systems’ impacts.
Evidence
References PNAI’s three years of research and reports on liability, interoperability, and environmental protection
Major discussion point
Risk-Based AI Governance Approach
Topics
Legal and regulatory
Agreed with
– Vint Cerf
– Alejandro Pisanty
Agreed on
Risk-based approach to AI governance with focus on user safety and provider liability
AI standards development is progressing significantly in EU, China, and other regions, particularly around safety issues
Explanation
Yik Chan highlights the substantial progress being made in AI standardization efforts globally, with particular emphasis on safety standards. He notes that standards will play a crucial role in AI regulation and points to developments in multiple jurisdictions including the EU’s AI Act standards and China’s safety-focused standards.
Evidence
References EU AI Act standards announcements and China’s progress on safety and other AI-related standards
Major discussion point
Technical Standards and Interoperability
Topics
Infrastructure | Digital standards
Agreed with
– Vint Cerf
– Audience
Agreed on
Importance of technical standards and interoperability for AI systems
Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures
Explanation
Yik Chan argues that the AI governance community is more prepared than previous technology governance efforts because of lessons learned from social media. He suggests that having vibrant discussions and early intervention from multiple stakeholders (civil society, academia, industry) represents a more precautionary approach than was taken with social media.
Evidence
Contrasts current multi-stakeholder AI discussions with past social media governance approaches
Major discussion point
Governance Implementation Challenges
Topics
Legal and regulatory
Disagreed with
– William Drake
Disagreed on
Optimism vs pessimism about multilateral AI governance
Olivier Crepin-Leblond
Speech speed: 144 words per minute
Speech length: 774 words
Speech time: 321 seconds
Interactive multi-stakeholder sessions are essential for effective governance discussions on bridging internet and AI governance
Explanation
Olivier emphasizes the importance of creating interactive forums where diverse speakers can present different angles on complex governance topics, followed by broader community discussion. He advocates for inclusive participation where attendees can join the discussion table and contribute to the dialogue.
Evidence
Organizes joint session between Dynamic Coalition on Core Internet Values and Dynamic Coalition on Network Neutrality with multiple speakers and open floor discussion
Major discussion point
Governance Implementation Challenges
Topics
Legal and regulatory
Time constraints require focused and efficient discussion formats to address complex governance challenges
Explanation
Olivier recognizes that meaningful governance discussions must balance thoroughness with practical time limitations. He structures the session to maximize productive dialogue while acknowledging the need to move efficiently through different perspectives and community input.
Evidence
Notes having only 75 minutes for the session and manages time allocation between speakers, commenters, and open discussion
Major discussion point
Governance Implementation Challenges
Topics
Legal and regulatory
Continued engagement beyond formal sessions is crucial for advancing governance frameworks
Explanation
Olivier emphasizes that meaningful governance work extends beyond individual sessions and requires ongoing collaboration through established channels. He encourages participants to maintain engagement through mailing lists and future collaborative work to build on the discussions initiated during formal meetings.
Evidence
Invites participants to join DC mailing lists and continue discussions, emphasizing the importance of ongoing participation in future work
Major discussion point
Governance Implementation Challenges
Topics
Legal and regulatory
Agreements
Agreement points
Risk-based approach to AI governance with focus on user safety and provider liability
Speakers
– Vint Cerf
– Yik Chan Ching
– Alejandro Pisanty
Arguments
Focus should be on risk to users and liability for providers, with high-risk applications requiring higher safety levels
Risk assessment, safety issues, and liability mechanisms are crucial for holding AI developers accountable
Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks
Summary
Multiple speakers converged on the importance of implementing risk-based governance frameworks that prioritize user safety and establish clear liability mechanisms for AI providers, particularly for high-risk applications like medical diagnosis and financial advice.
Topics
Legal and regulatory | Cybersecurity
Need for AI transparency and explainability to address opacity challenges
Speakers
– Sandrine ELMI HERSI
– Renata Mielli
– Vint Cerf
– Hadia Elminiawi
Arguments
Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability
AI systems need transparency and explainability especially for social impact assessment and compliance processes, unlike the naturally open internet protocols
Provenance of information used by AI agents and references must be available for critical evaluation of outputs
Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns
Summary
Speakers agreed that AI systems require significantly more transparency than current implementations provide, though they acknowledged practical challenges in achieving complete openness due to investment and security concerns.
Topics
Legal and regulatory
Importance of technical standards and interoperability for AI systems
Speakers
– Vint Cerf
– Yik Chan Ching
– Audience
Arguments
Agent-to-agent protocols and model context protocols are being developed to ensure interoperability among AI systems
AI standards development is progressing significantly in EU, China, and other regions, particularly around safety issues
Need to focus on intersection of AI and internet where AI feeds on web content and produces web content
Summary
There was strong agreement on the critical role of developing technical standards for AI interoperability, with recognition of ongoing global efforts in standardization and the need to focus on AI-internet intersections.
Topics
Infrastructure | Digital standards
Similar viewpoints
These speakers shared the view that while internet and AI are fundamentally different architectures, the core principles that made the internet successful should be adapted and applied to AI governance, particularly through extending network neutrality concepts.
Speakers
– Luca Belli
– Pari Esfandiari
– Sandrine ELMI HERSI
Arguments
Internet and AI are “two different beasts” – Internet built on open, decentralized architecture while AI is highly centralized and proprietary
Internet’s foundational principles of openness, decentralization, and transparency should guide AI governance as generative AI becomes a main gateway to content
Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services
Topics
Infrastructure | Legal and regulatory
Both speakers expressed skepticism about the feasibility of applying internet governance models to AI, emphasizing the need for more pragmatic approaches that account for commercial realities and dominant market players.
Speakers
– William Drake
– Andrew Campling
Arguments
Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency
Duty of care and precautionary principle should be foundational building blocks for AI governance
Topics
Legal and regulatory | Economic
Both speakers emphasized the importance of inclusive AI development that bridges digital divides while respecting national approaches, with focus on ensuring benefits reach underserved populations.
Speakers
– Hadia Elminiawi
– Shuyan Wu
Arguments
Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them
China Mobile’s experience shows importance of ensuring equal access, protecting user rights, and bridging digital divides in AI era
Topics
Development | Legal and regulatory
Unexpected consensus
Limitations of direct application of internet governance principles to AI
Speakers
– Renata Mielli
– William Drake
– Hadia Elminiawi
Arguments
Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure
Need to define precisely what aspects of AI require governance rather than applying generic high-level principles
Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns
Explanation
Despite the session’s goal of bridging internet and AI governance, there was unexpected consensus among speakers from different backgrounds that direct application of internet principles to AI faces significant practical and conceptual limitations.
Topics
Legal and regulatory | Infrastructure
Importance of leveraging existing regulatory frameworks rather than creating entirely new ones
Speakers
– Alejandro Pisanty
– Roxana Radu
– Yik Chan Ching
Arguments
Need to apply existing rules for automated systems and medical devices to AI rather than creating entirely new frameworks
Internet governance experience over 30 years provides mature framework for applying values to technical, policy and legal standards that AI governance lacks
Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures
Explanation
There was unexpected agreement across speakers that AI governance should build upon existing regulatory experience and frameworks rather than starting from scratch, representing a pragmatic approach to governance development.
Topics
Legal and regulatory
Overall assessment
Summary
The discussion revealed significant agreement on the need for risk-based AI governance, transparency requirements, and technical standards development, while acknowledging fundamental challenges in directly applying internet governance principles to AI systems.
Consensus level
Moderate to high consensus on core governance needs (safety, transparency, standards) but significant disagreement on implementation approaches and the applicability of internet governance models. This suggests that while there is shared understanding of AI governance challenges, the path forward requires careful consideration of AI’s unique characteristics rather than simple adaptation of existing frameworks.
Differences
Different viewpoints
Feasibility of complete AI transparency and openness
Speakers
– Hadia Elminiawi
– Sandrine ELMI HERSI
Arguments
Complete openness of AI models may be unrealistic given capital investment and could discourage innovation and raise security concerns
Many AI models remain “black boxes” requiring greater openness to research community for auditability and explainability
Summary
Hadia questions whether requiring full transparency and open-source access to AI models is practical given massive capital investments and security risks, while Sandrine advocates for greater openness particularly to researchers for auditability purposes
Topics
Legal and regulatory | Cybersecurity
Direct applicability of net neutrality principles to AI
Speakers
– Renata Mielli
– Sandrine ELMI HERSI
Arguments
Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure
Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services
Summary
Renata argues that net neutrality cannot be directly applied to AI because AI technology is inherently non-neutral, while Sandrine advocates for extending non-discrimination principles from network neutrality to AI systems
Topics
Infrastructure | Legal and regulatory
Starting point for AI governance frameworks
Speakers
– William Drake
– Andrew Campling
Arguments
Must identify where there’s actual functional demand for international governance rather than assuming need based on technology existence
Duty of care and precautionary principle should be foundational building blocks for AI governance
Summary
William emphasizes the need to identify functional demand for governance before creating frameworks, while Andrew advocates for starting with duty of care and precautionary principles as foundational elements
Topics
Legal and regulatory
Optimism vs pessimism about multilateral AI governance
Speakers
– William Drake
– Yik Chan Ching
Arguments
Multilateral regulatory interventions face political obstacles, and binding international agreements may be unrealistic
Early intervention and precautionary approaches in AI governance benefit from lessons learned from social media governance failures
Summary
William expresses pessimism about the feasibility of multilateral AI governance given current political realities, while Yik Chan is more optimistic about early intervention approaches based on lessons learned from social media
Topics
Legal and regulatory
Unexpected differences
Neutrality of AI technology itself
Speakers
– Renata Mielli
– Sandrine ELMI HERSI
Arguments
Net neutrality principles may not directly apply to AI since AI technology itself is not neutral unlike internet infrastructure
Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services
Explanation
This disagreement is unexpected because both speakers come from regulatory/governance backgrounds and might be expected to align on extending internet governance principles to AI, but they fundamentally disagree on whether AI’s inherent non-neutrality prevents direct application of net neutrality principles
Topics
Infrastructure | Legal and regulatory
Feasibility of international AI governance
Speakers
– William Drake
– Hadia Elminiawi
Arguments
Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency
Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them
Explanation
This disagreement is unexpected given both speakers’ extensive experience in international governance – William’s pessimism about private sector cooperation contrasts sharply with Hadia’s optimism about aligning international and national strategies
Topics
Legal and regulatory | Economic
Overall assessment
Summary
The main areas of disagreement center on the practical implementation of AI governance principles, the extent of transparency required, the applicability of existing internet governance frameworks to AI, and the feasibility of international cooperation
Disagreement level
Moderate to high disagreement level with significant implications – while speakers generally agree on the importance of applying internet values to AI governance, they fundamentally disagree on how to achieve this, suggesting that developing consensus on AI governance frameworks will require substantial additional work to bridge these conceptual and practical differences
Partial agreements
Partial agreements
Similar viewpoints
These speakers shared the view that while internet and AI are fundamentally different architectures, the core principles that made the internet successful should be adapted and applied to AI governance, particularly through extending network neutrality concepts.
Speakers
– Luca Belli
– Pari Esfandiari
– Sandrine ELMI HERSI
Arguments
Internet and AI are “two different beasts” – Internet built on open, decentralized architecture while AI is highly centralized and proprietary
Internet’s foundational principles of openness, decentralization, and transparency should guide AI governance as generative AI becomes a main gateway to content
Network neutrality non-discrimination principle should extend to AI infrastructure, models, and content curation to prevent privileging of certain services
Topics
Infrastructure | Legal and regulatory
Both speakers expressed skepticism about the feasibility of applying internet governance models to AI, emphasizing the need for more pragmatic approaches that account for commercial realities and dominant market players.
Speakers
– William Drake
– Andrew Campling
Arguments
Private actors’ material interests make them unlikely to embrace externally originated constructs like neutrality and transparency
Duty of care and precautionary principle should be foundational building blocks for AI governance
Topics
Legal and regulatory | Economic
Both speakers emphasized the importance of inclusive AI development that bridges digital divides while respecting national approaches, with a focus on ensuring that benefits reach underserved populations.
Speakers
– Hadia Elminiawi
– Shuyan Wu
Arguments
Regional and international AI strategies should align with and strengthen national sovereignties rather than conflict with them
China Mobile’s experience shows the importance of ensuring equal access, protecting user rights, and bridging digital divides in the AI era
Topics
Development | Legal and regulatory
Takeaways
Key takeaways
Internet’s foundational principles of openness, decentralization, and transparency can serve as signposts for AI governance, but require active adaptation since Internet and AI are ‘two different beasts’
AI governance faces fundamental tension between Internet’s open, distributed architecture and AI’s centralized, proprietary model controlled by few actors
Risk-based approach to AI governance should focus on user safety and provider liability, with high-risk applications requiring higher safety standards
Transparency and explainability are essential for AI systems but complete openness may be unrealistic due to investment concerns and security risks
Network neutrality principles of non-discrimination should extend to AI infrastructure and content curation to preserve diversity and prevent gatekeeping
Technical standards and interoperability protocols (like agent-to-agent and model context protocols) will be crucial for AI governance implementation (see the illustrative sketch after this list)
Global South perspectives and capabilities must be included in AI governance discussions to address existing asymmetries
AI governance should build on 30 years of Internet governance experience rather than starting from scratch, while recognizing what doesn’t directly apply
Multi-stakeholder governance approach is essential, but private actors’ commercial interests may limit participation in voluntary international standards
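To make the interoperability takeaway above more concrete, here is a minimal Python sketch of what a machine-readable envelope for messages exchanged between AI agents could look like. All field names, agent identifiers, and the provenance string are invented for illustration; this is not the actual agent-to-agent or Model Context Protocol wire format discussed in the session.

```python
# Illustrative only: a hypothetical interoperability envelope for
# agent-to-agent messages. Field names are invented for this sketch and
# do not reproduce any real protocol specification.
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    sender: str      # identifier of the originating agent
    recipient: str   # identifier of the target agent
    capability: str  # the action or tool being requested
    payload: dict    # task-specific parameters
    provenance: str  # pointer to the sender's model/training-data lineage

def serialize(msg: AgentMessage) -> str:
    """Encode the message as JSON so any compliant agent can parse it."""
    return json.dumps(asdict(msg))

if __name__ == "__main__":
    msg = AgentMessage(
        sender="agent://weather-bot",
        recipient="agent://travel-planner",
        capability="get_forecast",
        payload={"city": "Oslo", "days": 3},
        provenance="model=example-llm-v1",
    )
    print(serialize(msg))
```

The design point, as with internet protocols, is that a small shared schema lets independently built agents interoperate without a central coordinator.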
Resolutions and action items
Continue discussion through Dynamic Coalition mailing lists for interested participants
Develop detailed mapping matrix of which Internet properties apply to specific AI contexts and applications
ARCEP (French regulator) to complete ongoing technical report on applying Internet core values to AI governance
Focus on intersection points between AI and Internet rather than trying to govern all AI applications generically
Explore alternative transparency solutions like requiring open-source safety guardrails rather than full model openness
Unresolved issues
How to define and regulate new AI gatekeepers when traditional Internet governance models may not apply
Whether complete AI model transparency is realistic or desirable given investment requirements and security concerns
How to ensure meaningful participation of major AI companies in voluntary international governance frameworks
What specific aspects of AI actually require international coordination versus national regulation
How to balance innovation incentives with transparency and accountability requirements
Whether binding international AI agreements are feasible given current political climate
How to transform high-level principles into actionable technical standards and regulatory frameworks
How to address liability and responsibility in multi-agent AI systems
What constitutes functional demand for AI governance, as opposed to a need assumed simply because the technology exists
Suggested compromises
Require open-source safety guardrails and published safety measures rather than full AI model transparency
Apply layered safeguards approach with AI algorithms monitoring other AI algorithms for responsible use (illustrated in the sketch after this list)
Focus on risk-based regulation where high-risk applications have stricter requirements rather than blanket AI rules
Extend existing regulatory frameworks (medical devices, purchasing rules) to AI applications rather than creating entirely new governance structures
Pursue sector-specific AI governance approaches rather than generic cross-cutting regulations
Combine national AI regulations with aligned international cooperation strategies that support rather than conflict with sovereignty
Start with duty of care and precautionary principles as foundational building blocks rather than comprehensive Internet governance models
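The layered-safeguards compromise flagged above lends itself to a simple illustration: a primary generative model whose output is screened by an independent safety layer before release. The two model functions below are hypothetical placeholders standing in for separately trained systems; only the control flow, one algorithm monitoring another, is the point.

```python
# Minimal sketch of layered safeguards: an independent safety check
# screens the primary model's output before it reaches the user.
# Both functions are placeholders, not real model or library calls.

def primary_model(prompt: str) -> str:
    # Stand-in for a call to the generative model being safeguarded.
    return f"Draft answer to: {prompt}"

def guardrail_classifier(text: str) -> bool:
    # Stand-in for an independent safety model; returns True when the
    # text violates a published safety policy. A trivial keyword check
    # substitutes here for a learned classifier.
    banned = {"weapon", "exploit"}
    return any(word in text.lower() for word in banned)

def answer_with_safeguards(prompt: str) -> str:
    """Generate a response, then pass it through the independent safety layer."""
    draft = primary_model(prompt)
    if guardrail_classifier(draft):
        return "Response withheld under the provider's published safety policy."
    return draft

print(answer_with_safeguards("How do I plan a trip to Oslo?"))
```

Publishing the guardrail layer as open source, as suggested in the compromises above, would let outsiders audit the safety policy without requiring the primary model's weights to be open.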
Thought provoking comments
The Internet and AI are two different beasts. So we are speaking about two things that are two digital phenomena, but they are quite different. And the Internet, as Pari was reminding us very eloquently, has been built on an open, decentralized, transparent, interoperable architecture that made the success of the Internet over the past 70 years… but the question here is how we reconcile this with a highly centralized AI architecture.
Speaker
Luca Belli
Reason
This comment crystallized the fundamental tension at the heart of the discussion – the architectural incompatibility between the internet’s foundational principles and AI’s current development trajectory. It moved beyond surface-level comparisons to identify the core structural challenge.
Impact
This framing established the central problematic that all subsequent speakers had to grapple with. It shifted the discussion from whether internet values could apply to AI, to how they could be reconciled with AI’s inherently different architecture. This tension became a recurring theme throughout the session.
Every time someone interacts with one of those [large language models], they are specializing it to their interests and their needs. So in a sense, we have a very distributed ability to adapt a particular large language model to a particular problem… And that’s important, the fact that we are able to personalize.
Speaker
Vint Cerf
Reason
This insight reframed AI from being purely centralized to having distributed elements through user interaction. It challenged the binary view of centralized vs. decentralized systems and introduced nuance about how users can maintain agency even within centralized AI systems.
Impact
This comment provided a counterpoint to concerns about AI centralization and influenced later discussions about user agency and the potential for maintaining some internet-like distributed characteristics in AI systems. It offered a more optimistic perspective on preserving user empowerment.
Is it realistic or even desirable to expect that all AI models be made fully open source? Given the amount of capital investment in these models, requiring complete openness could discourage investment in AI models, destroying a lot of economic value and hindering innovation… Is it truly responsible or logical to allow unrestricted access to tools that could be used to build weapons or plan harmful disruptive actions?
Speaker
Hadia Elminiawi
Reason
This comment introduced crucial practical and ethical constraints that challenge idealistic applications of internet openness principles to AI. It forced the discussion to confront real-world trade-offs between values like openness and safety/security concerns.
Impact
This intervention shifted the conversation from theoretical principle-mapping to practical implementation challenges. It introduced the concept of ‘layered safeguards’ and sparked discussion about alternative approaches to transparency that don’t require full openness, influencing the overall tone toward more pragmatic solutions.
What we’ve done in internet governance over the last 30 years is much more than identifying core values. We apply them, we’ve embedded them into core practices, and we are continuing to refine these practices day by day… With AI, there seems to be a preference for unilateral standards, the giants developing their own standards, sharing them through APIs, versus globally negotiated standards.
Speaker
Roxana Radu
Reason
This comment highlighted a critical difference in governance maturity and approach between internet and AI governance. It identified the shift from collaborative standard-setting to unilateral corporate control as a key challenge, moving beyond principles to examine governance processes themselves.
Impact
This observation redirected attention from what principles to apply to how governance processes differ between domains. It influenced subsequent discussions about stakeholder participation and the challenges of bringing AI companies to collaborative governance tables.
We simply can’t just assume, because the technology is there and the issues are there, that there’s a functional demand [for international governance]. You know, often people point to things and say, oh, there’s some new phenomena. We must have governance arrangements. But very often the demand for governance arrangements is not equally distributed across actors.
Speaker
William Drake
Reason
This comment challenged a fundamental assumption underlying the entire session – that AI governance is necessarily needed or wanted by key stakeholders. It introduced a dose of political realism about power dynamics and incentives that was largely absent from earlier idealistic discussions.
Impact
This intervention served as a reality check that sobered the discussion. It forced participants to consider not just what governance should look like, but whether it’s actually achievable given current power structures. This influenced the final discussions toward more pragmatic approaches and acknowledgment of constraints.
If you want to regulate large language models provided over the internet for chatbots… Why would OpenAI, Google, Meta, et cetera… why would they come together and agree to limit themselves in some way? Also to sit at the table with people who are their users or their clients, potentially their competitors if something arises from their innovation.
Speaker
Alejandro Pisanty
Reason
This comment cut to the heart of the governance challenge by questioning the fundamental incentive structures. It moved beyond technical and ethical considerations to examine the political economy of AI governance, highlighting why voluntary cooperation might be unrealistic.
Impact
This comment reinforced the realist turn in the discussion initiated by Drake and others. It contributed to a more sober assessment of governance possibilities and influenced the final recommendations toward focusing on areas where there might be actual incentives for cooperation, such as liability and risk management.
Overall assessment
These key comments fundamentally shaped the discussion by introducing increasing levels of realism and complexity. The session began with an optimistic framing about mapping internet values to AI governance, but these interventions progressively challenged assumptions, introduced practical constraints, and highlighted structural differences between the domains. The comments created a dialectical progression from idealism to realism, ultimately leading to more nuanced and pragmatic conclusions. Rather than simply advocating for applying internet principles to AI, the discussion evolved to acknowledge the fundamental tensions, power dynamics, and implementation challenges involved. This resulted in a more sophisticated understanding of the governance challenge and more realistic recommendations focused on specific areas like risk management, liability, and targeted interventions rather than wholesale principle transfer.
Follow-up questions
How do we define who the new gatekeepers in AI are, and how do we implement laws that may not yet exist to regulate them?
Speaker
Luca Belli
Explanation
This addresses the fundamental challenge of identifying control points in AI systems and developing appropriate regulatory frameworks, which is crucial for applying internet governance principles to AI
What alternative solutions can we consider for AI transparency beyond making all models fully open source?
Speaker
Hadia Elminiawi
Explanation
This explores practical approaches to transparency that balance openness with security concerns and investment protection, which is essential for developing workable AI governance frameworks
How can we develop a detailed matrix mapping which internet properties apply generally or in specific AI contexts?
Speaker
William Drake
Explanation
This would provide a systematic framework for understanding how internet governance principles can be applied across different AI applications and contexts
What aspects of AI processes absolutely require international coordination or harmonization?
Speaker
William Drake
Explanation
This is critical for determining where international governance efforts should focus and where there is genuine functional demand for coordination
How do we bring different stakeholders, especially dominant AI companies, to the table for governance discussions?
Speaker
Alejandro Pisanty
Explanation
This addresses the practical challenge of creating incentives for major AI players to participate in governance frameworks that may limit their operations
How can we establish indelible ways to identify sources of content used to train AI models?
Speaker
Vint Cerf
Explanation
This is important for establishing provenance and accountability in AI systems, which is fundamental to trust and liability frameworks
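One hedged sketch of how such provenance might work, assuming a dataset publisher willing to attest to its corpus: hash every training document into a manifest, then authenticate the manifest with a keyed MAC so any later alteration is detectable. The key, URL, and scheme below are placeholders; real provenance standards (signed content-credential schemes, for example) are considerably more elaborate.

```python
# Sketch of content provenance for training data: content-derived hashes
# plus an authenticated manifest. All names and the key are illustrative.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the dataset publisher

def fingerprint(document: bytes) -> str:
    """Content-derived identifier: changing one byte changes the hash."""
    return hashlib.sha256(document).hexdigest()

def build_manifest(corpus: dict) -> dict:
    """Map each source to its hash, then MAC the whole manifest."""
    entries = {source: fingerprint(doc) for source, doc in corpus.items()}
    body = json.dumps(entries, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return {"entries": entries, "mac": tag}

corpus = {"https://example.org/article-1": b"Example training text."}
print(json.dumps(build_manifest(corpus), indent=2))
```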
How do existing web standards and expectations for user agents apply to AI-based agents?
Speaker
Dominique Hazaël-Massieux
Explanation
This explores how established internet protocols and standards can be extended to govern AI agents operating on the web
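As a small illustration of how existing web conventions could carry over, an AI-based agent can self-identify through the standard User-Agent header, just as browsers and crawlers do, letting site operators apply their existing per-agent policies. The agent name and contact URL below are invented for this example.

```python
# An AI agent identifying itself via the standard User-Agent header,
# following the same convention browsers and crawlers already use.
# The product token and contact URL are illustrative placeholders.
import urllib.request

req = urllib.request.Request(
    "https://example.org/",
    headers={"User-Agent": "ExampleAIAgent/1.0 (+https://example.org/agent-info)"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
```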
How can we transform AI governance principles into technical standards?
Speaker
Renata Mielli
Explanation
This addresses the practical implementation challenge of moving from high-level principles to actionable technical specifications
What does ensuring transparent algorithms mean in practical terms for AI systems?
Speaker
Hadia Elminiawi
Explanation
This seeks to define concrete requirements for AI transparency beyond abstract principles
How can we ensure AI systems remain open to smaller and independent content creators rather than just amplifying dominant sources?
Speaker
Sandrine ELMI HERSI
Explanation
This addresses concerns about AI systems potentially concentrating power and reducing diversity in content and innovation
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.