Open Forum #30 High Level Review of AI Governance Including the Discussion

26 Jun 2025 09:00h - 10:00h


Session at a glance

Summary

This discussion focused on the current state and future directions of global AI governance, featuring perspectives from government officials, international organizations, and private sector representatives. The panel was moderated by Yoichi Iida, former Assistant Vice-Minister of Japan's Ministry of Internal Affairs and Communications, who outlined the evolution of AI governance from early initiatives in 2016 through recent developments including the OECD AI principles, the Hiroshima AI process, and the UN Global Digital Compact.


Lucia Russo from the OECD emphasized three strategic pillars: moving from principles to practice, providing evidence-based policy guidance, and promoting inclusive international cooperation. She highlighted the merger of the Global Partnership on AI with the OECD, expanding membership to 44 countries including six non-OECD members. Abhishek Singh from India’s Ministry of Electronics stressed the importance of democratizing AI access, particularly for the Global South, advocating for equitable access to compute resources, inclusive datasets, and capacity building initiatives.


Juha Heikkila from the European Commission clarified that the EU AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining innovation-friendly policies. Melinda Claybaugh from Meta emphasized the need to connect existing frameworks to avoid fragmentation and duplication, calling for a shift from principle development to practical implementation.


Ansgar Koene from EY highlighted the growing need for robust governance frameworks as organizations move AI from experimental to mission-critical applications. All participants agreed on the importance of moving from principles to practice, building capacity globally, and ensuring inclusive participation in AI governance discussions. The conversation concluded with recognition that while AI and internet governance share some similarities, AI governance faces unique challenges requiring specialized approaches tailored to diverse use cases and risk profiles.


Key points

## Major Discussion Points:


– **Evolution and Current State of Global AI Governance**: The discussion traced the development of international AI governance from early initiatives in 2016 through major frameworks like OECD AI Principles (2019), the EU AI Act (2023), and the Hiroshima AI Process, highlighting how governance has evolved to address new challenges posed by generative AI technologies.


– **Moving from Principles to Practice**: A central theme emphasized by multiple speakers was the critical need to translate established AI governance principles into concrete, actionable policies and implementation frameworks, including developing toolkits, assessment mechanisms, and practical guidance for organizations and governments.


– **Inclusivity and Global South Participation**: Significant focus on ensuring equitable access to AI technologies, compute resources, and decision-making processes for developing countries and the Global South, with emphasis on capacity building, democratizing AI access, and preventing concentration of AI power in a few companies and countries.


– **Interoperability and Avoiding Fragmentation**: Discussion of the challenge of coordinating multiple international AI governance frameworks while avoiding regulatory fragmentation, with emphasis on finding common ground, connecting existing initiatives, and streamlining efforts to prevent duplication.


– **Multi-stakeholder Collaboration and Implementation**: Examination of roles and responsibilities of different stakeholders (governments, international organizations, private companies, civil society) in implementing AI governance, with focus on transparency, accountability, and collaborative approaches to address global AI challenges.


## Overall Purpose:


The discussion aimed to assess the current landscape of global AI governance and chart a path forward for international cooperation. The panel sought to evaluate existing frameworks, identify priorities for different stakeholders, and explore how to effectively implement AI governance principles while ensuring inclusivity and avoiding regulatory fragmentation.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by mutual respect and shared commitment to responsible AI development. Speakers demonstrated alignment on core principles while acknowledging different approaches and challenges. The tone was professional and forward-looking, with participants building on each other’s points rather than expressing disagreement. There was a sense of urgency about moving from theoretical frameworks to practical implementation, but this was expressed through cooperative problem-solving rather than criticism of current efforts.


Speakers

**Speakers from the provided list:**


– **Yoichi Iida** – Former Assistant Vice-Minister of the Japanese Ministry of Internal Affairs and Communications, Chair of the OECD Digital Policy Committee


– **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology


– **Lucia Russo** – OECD Economist at AI and Digital Emerging Technologies Division


– **Ansgar Koene** – Global AI Ethics and Regulatory Leader from EY Global Public Policy


– **Melinda Claybaugh** – Director of Privacy and AI Policy from Meta


– **Juha Heikkila** – Advisor for International Aspects of Artificial Intelligence from European Commission


– **Audience** – Unidentified audience member who asked a question


**Additional speakers:**


– **Shinichiro Terada** – From the University of Takyushu, Japan (audience member who asked a question about AI governance compared to Internet governance)


Full session report

# Global AI Governance Discussion: From Principles to Practice


## Introduction and Context


This discussion examined the current state and future directions of global artificial intelligence governance, bringing together perspectives from government officials, international organisations, and private sector representatives. The panel was moderated by Yoichi Iida, former Assistant Vice-Minister of Japan's Ministry of Internal Affairs and Communications and current Chair of the OECD Digital Policy Committee.


The conversation focused on assessing existing international cooperation mechanisms, identifying priorities for different stakeholders, and exploring pathways for translating established principles into practical implementation while ensuring global inclusivity.


## Current State of AI Governance Frameworks


### OECD’s Evolution and Approach


Lucia Russo from the OECD outlined the organisation’s strategic evolution from establishing foundational principles in 2019 to providing comprehensive policy guidance. She emphasised three strategic pillars: moving from principles to practice, providing evidence-based policy guidance through initiatives such as the AI Policy Observatory, and promoting inclusive international cooperation.


A significant development has been the merger of the Global Partnership on AI with the OECD, expanding membership to 44 countries, including six non-OECD members (India, Serbia, Senegal, Brazil, Singapore, and one other). The OECD is developing a toolkit to help countries implement AI principles, though specific details about its format and functionality were not elaborated.


### EU AI Act and Regional Implementation


Juha Heikkila from the European Commission clarified that the EU AI Act regulates specific uses of AI rather than the technology itself, employing a risk-based approach. He explained that “about 80% according to our estimate, maybe even 85% of AI systems…would be unaffected” by the legislation, addressing misconceptions about its scope.


The EU’s engagement extends beyond its own regulatory framework to include participation in G7, G20, Global Partnership on AI, and various international summits, aiming to support global coordination while maintaining compatibility with EU objectives.


### Hiroshima AI Process Progress


The discussion highlighted progress in the Hiroshima AI process, with the moderator noting that 20 companies submitted reports to the OECD website on April 22nd, demonstrating industry engagement with the code of conduct and guiding principles agreed by G7 nations.


## Key Stakeholder Priorities


### Industry Perspective: Moving Beyond Principles


Melinda Claybaugh, Director of Privacy and AI Policy from Meta, stressed the importance of shifting focus from establishing additional principles to translating existing frameworks into actionable measures. She proposed three specific areas for continued work:


– Continuing to build policy toolkits


– Creating libraries of resources including evaluations and benchmarks


– Continuing the global scientific conversation


Ansgar Koene from EY emphasised the need for reliable, repeatable assessment methods for AI systems, highlighting the importance of standards development and transparency in evaluation methods.


### Government Priorities: Capacity and Implementation


Abhishek Singh, Under-Secretary from the Indian Ministry of Electronics and Information Technology, emphasised that operational implementation requires enhanced regulatory capacity for testing AI solutions and practical translation of agreed principles into concrete actions. He highlighted India’s efforts to make compute accessible at very low cost, noting that “high-end H100s, H200s are made available at a cost less than a dollar per GPU per hour.”


## Major Challenge: Democratising AI Access


### Global South Participation and Resource Access


Abhishek Singh articulated the challenge of ensuring that Global South countries become genuine stakeholders in AI decision-making processes rather than passive recipients of frameworks developed elsewhere. He emphasised the need for:


– Access to high-end compute resources


– More inclusive datasets that represent diverse global contexts


– A global repository of AI solutions, similar to digital public infrastructure models


Singh noted the current concentration of AI power “in a few companies within a few countries” and called for more democratic participation in AI governance and development.


### Infrastructure and Capacity Building


The discussion revealed significant challenges in ensuring equitable access to technical infrastructure necessary for AI development. Singh proposed creating a global depository of AI solutions that could enable more equitable AI development across different countries and contexts, addressing issues like deepfakes and misinformation that particularly affect developing nations.


## International Cooperation and Coordination


### Managing Framework Proliferation


Participants acknowledged both the benefits and challenges of multiple AI governance initiatives. While demonstrating international cooperation, there are concerns about potential fragmentation. Juha Heikkila noted that despite apparent multiplication of efforts, there are consistent elements such as risk-based approaches across different frameworks.


Melinda Claybaugh emphasised the risk of fragmentation for companies developing global technologies, highlighting the need for approaches that respect different national contexts while maintaining sufficient consistency for global deployment.


### Role of International Organisations


The conversation highlighted the important role of international organisations in facilitating coordination. Participants discussed emerging initiatives such as the UN Scientific Panel on AI, with Juha noting it as “quite a crucial component,” and mentioned two UN resolutions, “one led by US and one led by China.”


## AI Governance versus Internet Governance


An audience question from Shinichiro Terada from the University of Takyushu prompted discussion about differences between AI and Internet governance. Juha Heikkila explained that AI governance differs fundamentally because AI extends beyond Internet applications to include embedded systems, robotics, and autonomous vehicles, requiring different approaches tailored to AI-specific characteristics.


Despite these differences, Abhishek Singh suggested that AI governance should adopt multi-stakeholder principles from Internet governance while recognising that AI requires enhanced global partnership due to the concentration of control in fewer corporations.


## Future Directions and Commitments


### Immediate Next Steps


Several concrete commitments emerged from the discussion:


– India will host an AI Impact Summit in February, focusing on operationalising inclusive AI governance principles


– Continued development of the OECD toolkit for implementing AI principles


– Ongoing Hiroshima AI process reporting with industry participation


– Building libraries of evaluation resources and benchmarks for AI assessment


### Long-term Strategic Directions


The discussion pointed towards creating shared resources that could support more equitable AI development globally, including the proposed global repository of AI solutions. There was also emphasis on building capacity-building networks, as outlined in the implementation of the Global Digital Compact.


## Conclusion


The discussion revealed strong consensus on the urgent need to move from principle establishment to practical implementation of AI governance frameworks. While significant progress has been made in establishing international cooperation mechanisms, major challenges remain in ensuring equitable access to AI technologies and meaningful participation by developing countries.


Key areas requiring continued attention include addressing resource inequities, building regulatory capacity globally, and coordinating multiple governance frameworks to prevent fragmentation while respecting different national approaches. The path forward requires sustained commitment from all stakeholders and innovative approaches to resource sharing and capacity building that go beyond traditional models of international cooperation.


Session transcript

Yoichi Iida: Hi, Abhishek. How are you? Good morning everybody! And good morning, good afternoon, good evening, probably, depending on the place where you are, to online participants. My name is Yoichi Iida, the former Assistant Vice-Minister of the Japanese Ministry of Internal Affairs and Communications, and also the Chair of the OECD Digital Policy Committee. Thank you very much for joining us. Today we are discussing the current situation and also some foresight on global AI governance. We have very excellent speakers on my left side. So, let me introduce briefly my speakers before they take the floor and make their own self-introduction. From my end, first, Dr. Ansgar Koene, the Global AI Ethics and Regulatory Leader from EY Global Public Policy. Next to him, Mr. Abhishek Singh, the Under-Secretary from the Indian Ministry of Electronics and Information Technology. Thank you very much, Abhishek. Next to him, we have Lucia Russo, Economist at the OECD's AI and Digital Emerging Technologies Division. Next to her, we have Ms. Melinda Claybaugh, Director of Privacy and AI Policy from Meta. Thank you very much for joining us. And last but not least, we have Dr. Juha Heikkila, Advisor for International Aspects of Artificial Intelligence from the European Commission. Thank you very much for joining us. So, AI governance. As all of you know, we are seeing rapid changes in technologies, but also in policy formulation. The Japanese government started the international discussion on AI governance as early as 2016, when we made a proposal for an international discussion on AI governance at the G7 and also the OECD. This proposal led to the agreement on the first international and intergovernmental principles, the OECD AI Principles, in 2019, and the G7 discussion also led to the launch of the Global Partnership on AI (GPAI) in 2020.
Also, UNESCO started the discussion on its ethical AI recommendations, and the European Commission started the discussion on an AI governance framework, which led to the enactment of the AI Act in 2023. After these years, we saw a rapid change in AI technology, in particular near the end of 2022 with the rapid rise of ChatGPT, and we saw a lot of new types of risks and challenges brought by the new AI technology. That was the background to why we started the discussion at the G7 on the Hiroshima process: we wanted to respond to the new risks and challenges brought by generative AI. Near the end of that year, the G7 agreed on the code of conduct and guiding principles of the Hiroshima AI Process, and this effort led to the launch of the reporting framework for the code of conduct of the Hiroshima process in 2024. This year, we saw 20 reports by AI companies publicized on the OECD website on the 22nd of April. In the meantime, the UN also started the discussion on AI governance, and we saw agreement on two UN resolutions related to AI, one led by the US and one led by China. The UN also started the discussion on the Global Digital Compact, which concluded in September 2024, and we are now in the process of the GDC follow-up and also at the beginning of the discussion on the WSIS+20 review. So this is the rapid and short history of AI governance over the last several years. Against this background, I would like to discuss with these excellent speakers what the priorities and emphases in these discussions are for different stakeholders in the AI ecosystem, and what their perspectives are. So, let me begin with Lucia from the OECD. What do you think your priorities and emphasis are in promoting international or global AI governance, and what international initiatives and frameworks do you consider very significant at present, and also for future discussion, for countries, international organizations and other stakeholders? What is your view?


Lucia Russo: Thank you, Yoichi. Good morning, and thank you to my fellow panelists for this interesting discussion. As Yoichi mentioned, we started working at the OECD, together with countries like Japan and multi-stakeholder groups, on international AI governance back in 2019, and we have continued that work throughout the years to move from the principles that were adopted by countries to policy guidance on how to put them into practice. The role of the OECD has been since then to be a convener of countries and multi-stakeholder groups and to provide policy guidance and analytical work to support this evidence-based understanding of the risks and opportunities of artificial intelligence. So, in terms of the role for the OECD, there are three main strategic pillars. The first is moving from principles to practice, and that is undertaken through several initiatives, ranging from a broad expert community that supports our work to providing metrics for policymakers. The second is through our OECD.AI Policy Observatory, which provides trends and data but also a database of national AI policies that allows countries to see what others are doing and to learn from experiences across the globe. And the third is to promote inclusive international cooperation. In that regard, a key milestone was achieved in July 2024, when the Global Partnership on AI and the OECD merged and joined forces to promote safe, secure and trustworthy AI, which again broadened the geographic scope beyond OECD members. We now have 44 members of the Global Partnership on AI, and these include six countries that are not OECD members, including India, Serbia, Senegal, Brazil and Singapore. The idea is that this broader geographic scope will increase further as we proceed, and that will foster even more effective and inclusive conversations with these countries.
And in terms of the priorities that we see, of course, the Hiroshima AI process was mentioned, and that is an initiative that we see as very prominent, because it allows having a common standardised framework for the principles that were proposed by the Japanese government. But more than that, the transparency element is also very important, because it is not only about committing to these principles; it is also about demonstrating that companies are acting upon them and sharing in a transparent way which concrete actions they are taking to put them into practice. And this is really important both for countries and for companies themselves, which can have a learning experience by sharing these initiatives and seeing what others are doing in practice to promote the different principles that we see in the framework. So, these are the areas where the OECD will continue working: evidence, inclusive multi-stakeholder co-operation, and guidance on policies.


Yoichi Iida: Okay. Thank you very much. Actually, the OECD AI Principles agreed in 2019 laid a robust foundation for national and international AI governance. That was very supportive, and we also learned quite a lot from these principles. Japan enacted a new AI law only last month, and there are many reflections of the OECD AI Principles in our own AI law. So thank you very much. Now I would like to invite the two speakers from governmental bodies, and I turn first to Abhishek. Thank you very much for joining us. From the government perspective, what do you think your priorities and emphasis are in developing AI governance, and how do you evaluate the current situation?


Abhishek Singh: Thank you. Thank you, Yoichi, and thank you for highlighting this very, very important issue of AI governance and how we can work together with the global community, especially with the work being done at the OECD and in various forums, whether it is the UN High-Level Advisory Body on AI, the G7 Hiroshima process, or the G20 initiatives in Brazil and now South Africa. The whole world together is trying to address a common issue: how we can leverage the power of this technology, how we can use it for larger social good, how we can use it for enabling access to services, and how it can empower people at the last mile. That has been the principal mantra of what we have been doing in India. We have a large country, and we do believe that AI can be a kinetic enabler for empowering people and enabling access to education and healthcare in the remotest corners of the country, in various languages, with a voice interface for empowering people. To do this, we need to have a balanced, pro-innovation, inclusive approach towards the development of the technology. We need to ensure that access to AI, compute, data sets, algorithms and other tools for building safe and trusted AI is made more equitable, especially for the Global South. Currently, the state of the technology is such that the real power of AI is concentrated in a few companies in a few countries. If we have to democratise this, if we have to ensure that the countries of the Global South become stakeholders in the conversations, we need to have this principle ingrained in all the countries around the world. This principle was well ingrained in the Global Partnership on AI when we chaired it, following last year in Serbia and in the coming meeting in Slovakia.
The inclusive framework that we came up with at the OECD for GPAI 2.0 also defines that we need to become much more inclusive and bring countries of the Global South to the decision-making tables. Towards this, the initiatives in the Global Digital Compact also define how we actually make it happen, how we ensure that a researcher in a remote corner of a low- and medium-income country has access to similar compute as a researcher in Silicon Valley. We need to create frameworks. At the AI Action Summit that France co-chaired along with India, there was a concept of Current AI that came in, which required financial commitments to build up an institutional framework for funding such initiatives and for adopting AI-based technology. That is something we need to continue, and as we move from the French summit to the India summit that we will be hosting next year in February, we will need to work with the entire AI community to institutionalize this. In India we are making compute accessible at a very low cost: the high-end H100s, H200s are made available at a cost less than a dollar per GPU per hour. Can we build up a similar framework so that researchers in low- and medium-income countries also get access to something similar? Can we build up a data-sharing protocol, a protocol in which, when models are trained, the data sets are much more inclusive, the data sets from context-sharing… We have a model in the DPI ecosystem: there is a global repository of DPI solutions. Can we build up a global repository of AI solutions which can be accessible to more countries? That is something we need to work on when we are working on global governance frameworks. And there are tools. How do we do privacy enhancement? How do we do anonymization of data? How do we ensure that we are able to prevent the damage that deepfakes can cause?
Democracies across the world are facing this challenge of misinformation on social media, and AI sometimes becomes an enabler of that. Can we develop tools for watermarking AI content? Can we develop global frameworks so that social media companies become part of this whole ecosystem, so we can prevent the risks that democracies face? And how do we ensure, including by building capacities across the world, that we are able to build up an AI ecosystem that is more fair, more balanced, more equitable? We are working with the global community towards this, and I hope that this discussion will further contribute to creating such enabling frameworks.


Yoichi Iida: Thank you very much for the very comprehensive remark. I believe the ultimate objective of governance is to enable us to make use of AI as a technology as much as possible, but also without concern. So this is a point we need to share, and also the common objective of building up the global governance framework. Having said this, Juha, people say, you know, the AI Act may be a little bit too strict, bringing excessive regulation. I don't know, what is your opinion, and what are the priorities or requirements of the EU?


Juha Heikkila: Thank you, Yoichi, and thank you very much for this invitation. So I think it is very useful to understand that the AI Act does not regulate the technology in itself; it regulates certain uses of AI. We have a risk-based approach, and it only intervenes where necessary. So there are these statements that it regulates AI; it doesn't actually, it regulates certain uses of AI which are considered to be either too harmful, too dangerous or too risky, so there need to be some safeguards in place. In fact it is innovation friendly, because about 80% according to our estimate, maybe even 85% of AI systems that we see around would be unaffected by it. And it applies equally to everyone placing AI systems on the EU market, whether they are European, Asian, American, you name it. So in that sense it creates a level playing field and it prevents fragmentation. We have uniform rules in the European Union; we don't have a patchwork of rules. It's not as if we wouldn't have regulation without the AI Act, because the member states of the European Union would have proceeded to regulate. But regulation is just one aspect of our activities, and it is a common misconception that we only do regulation. We actually invest a lot in innovation; we have been doing that over the years, and we have always done it. The third pillar, in addition to trust (regulation) and excellence (innovation, research, and so on), is international engagement. We think that some of the challenges related to AI, or many of them, actually cross boundaries. They are global. We think that cooperation is both necessary and useful. So we want to be involved, and we engage bilaterally and multilaterally to support the setting up of a global level playing field for trustworthy, human-centric AI. And we build coalitions with those who share these objectives. We want to have AI for the good of us all. So we want to promote the responsible stewardship and democratic governance of AI.
But we also cooperate on technical aspects: for example, cooperation on AI safety, and support to innovation and its take-up in some key sectors. We do this bilaterally with a number of partner countries, which is increasing, but we are also involved in all the key discussions: the G7, so the Hiroshima process was already mentioned, and the Hiroshima Friends; the G20; and the Global Partnership on AI, of which the European Union is a founding member, so we have been involved in that from the very beginning, now, of course, in an integrated partnership with the OECD. And with the OECD, we are involved in all the key working groups which relate to AI. We are a member of the Network of AI Safety Institutes. We have also been actively involved in the summits, Bletchley, Seoul, Paris, and the upcoming summit in India is of course also one where we will be involved. And, of course, via the member states, we are also involved in the Global Digital Compact and its implementation, which is now in a critical phase. Basically we do this from two perspectives. On the one hand, we do it to promote the goals which I listed, and on the other, to ensure that whatever conclusions, declarations and statements are made in the Global Digital Compact are compatible with our strategy and with our regulation, so that we don't end up in a situation where we have international commitments which somehow conflict with our strategy in general


Juha Heikkila: and then with our regulation in particular. So this is basically the rationale for our engagement and our involvement. Thank you.


Yoichi Iida: Thank you very much for the detailed explanation, and we really understand that the EU AI Act is aimed at pursuing an innovation-friendly environment across the EU region. We also discussed in the G7 that different countries and different jurisdictions have different backgrounds and different social or economic conditions, so the approaches to AI governance have to differ from one another; but still, that is why we need to pursue interoperability across different jurisdictions and different frameworks. I am personally impressed by the approach of the European Commission in the discussion on the code of practice, which is very open to all stakeholders and gives our partners a lot to discuss. The private sector people were also very much impressed when they joined the discussion and submitted their comments, which were much reflected in the current text, and we are expecting a very good result from the discussion on the code of practice as part of the AI Act. Thank you very much. Now I turn to the other stakeholders. So, Melinda, from the perspective of a big AI company, how do you evaluate the current situation of global AI governance? And what are the priorities or requirements of a private company in the governance framework, and what do you expect?


Melinda Claybaugh: Thank you so much for the question, and thank you for the opportunity to be here. As you were giving the opening remarks and listing all of the frameworks and acronyms and all of the principles and bodies involved here, it is really remarkable how much work has gone on in the last couple of years in the international community on AI governance. There has been an incredible proliferation of frameworks, principles, codes and governing strategies. At this moment, it is really important to consider connecting the dots. We do not want to continue down the road of duplication, proliferation and the continued putting down of principles. We have largely seen a similarity and coherence of approach across the various frameworks that have been put out at a high level, and I think it is really important at this point to think about how we connect these frameworks and principles. Because if we do not think about that, then we are at risk, as was mentioned, of fragmentation. From a private company's perspective, the challenge of developing and deploying this technology, which is global and does not have borders, as we are all familiar with, is the risk of fragmentation of approach. So it is really important to think about what we have in common and how we draw connections between these principles. Another priority is really moving from principle to practice, and I have been encouraged to see this as a theme in conversations throughout these few days on AI governance. We have the principles, but how do we put them into practice? I mean that in a few different ways. Of course, from a company's perspective, what does it mean? I am encouraged by the work of trying to translate some of these things into concrete measures.
But I think also from a country's perspective: countries that want to implement, deploy and really roll out AI solutions to public challenges, how do they do that? What is the toolkit of measures, policies and frameworks at a domestic level that is important to have in place? Things like an energy policy, scientific and research infrastructure, data, compute power: all of those are really important. How do countries make sure they have the right elements in place to really leverage AI? And then, from the perspective of policy institutions, how do they set out toolkits and frameworks to make sure that all stakeholders have the opportunity to adopt AI? I am also encouraged, as we think about moving from principle to practice, that there seems to be a broadening of the conversation beyond some of the early principles. It is important to make sure we are looking at the benefits as well as minimizing the risks, and I think the Hiroshima AI principles and process were really important in ensuring that we are maximizing the benefits as well as minimizing the risks. So what does that mean, and how do we expand the conversation beyond risks to make sure it is benefits-based? That means including a lot of stakeholders who have not been part of the conversation. So how do we do that, for example at the AI Impact Summit? How do we include as many stakeholders as possible in the conversation: civil society, everyone from the Global South?


Yoichi Iida: How do we include and expand that conversation, and how do we make sure we are moving to tangible, concrete impacts, avoiding fragmentation and improving interoperability? And also your second point, from principles into action: this is very important, and that is exactly what we are now pursuing. For example, I understand the OECD is making efforts on a toolkit for the AI principles. And also the Hiroshima process: thank you very much for those results. Through the reporting framework, we now have what companies are doing internally when they assess risks, take countermeasures, and publicize what they are doing. All that information is now on the OECD website, and there is a lot to learn from this practical information. But still, we found those reports a little difficult to read and understand, so practicality remains another challenge. But I believe we are making progress. So, having listened to these answers, what is your opinion, and how do you evaluate the current situation?


Ansgar Koene: Sure, thank you very much, and thank you for the invitation to be on this panel. Reflecting on this space around AI governance, both from how we within EY are looking at this and from what we are seeing amongst our private sector and public sector clients, whom we are helping to set up their AI transformation and the governance frameworks around it: we are seeing that, as more and more of these organizations move from exploring possible uses of AI in test cases towards building it into mission-critical use cases, where failure of the AI system will either have a significant impact directly on consumers or citizens, or significant impacts on the ability of the organization itself to operate, it is becoming very critical for organizations to have the confidence that they have a good governance framework in place. Such a framework allows them to assess, measure and understand the reliability of the AI system, the use cases for which it truly operates, the boundary conditions within which it should be used and where it should not be used, and the kind of information that people within the organization and people outside need in order to use the AI systems correctly. So if we reflect from that point of view, the need organizations have for a good governance framework for the use of AI, onto these global exercises and initiatives, I think there are effectively two dimensions in which these global initiatives are important. One is the direct one: things like the OECD AI principles helped all organizations to have a foundation they could reflect on as they think about the key things they need in their governance thinking. The G7 code of conduct has helped to elaborate that further and to pinpoint in more detail what goes into questions such as what good transparency is, or how to think about inclusiveness, for instance of the people that need to be reflected on when developing these systems.
And now the Global Digital Compact also helps to provide a broader understanding of also the way to think about AI governance within the broader context of good governance in itself. But then there’s also the indirect way from the point of view of companies, the indirect way in which these global instruments of course help to make sure that different countries have a common base from where to approach how to create either regulations or voluntary guidelines, whatever works best within their particular context. But it gives a…


Yoichi Iida: Thank you very much. Exactly as you said, we need to improve interoperability and coherence across different governance frameworks. We have to admit there are differences in approaches, but we need a common foundation: probably human centricity and democratic values, including transparency, accountability and data protection. So thank you very much for the comment. We believe our approaches, and the world, are proceeding in the right direction by sharing experiences and knowledge and trying to improve coherence and interoperability. Now, since we have different frameworks going on, a second question: what do you think you need to do as a stakeholder, what is your role and strategy in the coming years, and in particular what do you expect from the UN Global Digital Compact, which is now discussing global AI governance? At this time I would like to start with Abhishek.


Abhishek Singh: As I mentioned, our strategy for AI implementation is to ensure that we use this technology for enabling access to services for all Indians, in all languages, especially through voice. That will really empower people. What do we expect from the Global Digital Compact to make this a reality? We have a lot of expectations, because we are catching up with the West in the evolution of this technology. How do we enable access? The first request that we had, especially to the U.S., because that is where the companies who own compute are based, and 90% of it is controlled by one company, is to ensure that we have access to at least 50,000 GPUs in India. That is one practical requirement. Second is to ensure that the models, which are developed primarily in the West, and DeepSeek came from China, become more inclusive, in the sense of being trained on datasets from across the world. That is our second request. And the third, which is the most important part, is building capacities. The Global Digital Compact document also talks about a capacity-building initiative and setting up a capacity-building network: how do we ensure that skills and capacities in all countries are developed and enhanced, to be able to take advantage of the evolving technologies? And then we also need to build safeguards. The OECD principles are there for responsible AI, for ensuring safe, trustworthy development of AI. But to ensure that, one needs tools, and even regulators. Especially being in the government, when we feel there is a need to regulate, how do we enhance the regulatory capacity? Even if you want to test whether a particular solution meets the standards and benchmarks, do you have the regulatory capacity to test that? Enhancing that, and enhancing cooperation on that, will be very, very critical. So I would say my asks of the Global Digital Compact and the UN process will be at the operational level, for the Global South and the global community.
The principles are largely agreed on. Everybody talks the same language at every forum. But how do we translate that talk into action? That would be the real requirement that we will have. And we are happy to work with the global community in making this a reality, not only for India,


Yoichi Iida: but for the entire Global South and the world community. OK, thank you very much. Inclusivity will be one of the key words in the coming months in the global AI governance discussion, and there is a lot of expectation for India's AI Impact Summit next year. So thank you very much for the comment. And now I invite Melinda for your views.


Melinda Claybaugh: Thank you so much. Under the theme of moving from principles to practice, three ideas. One is continuing to build policy toolkits, which I think the OECD is really well placed to do, for countries that want to advance their AI adoption. Two is libraries of resources: evaluations, benchmarks and third-party testing of AI that has been done, gathered in one place. A lot of entities are engaged in this, and building the knowledge base will be really important. And third is continuing the global scientific conversation. On that point, this is where I lead into the Global Digital Compact: the UN Scientific Panel on AI, as an independent scientific body, to continue research and conversation and to make sure the best scientific voices are coming together. And then the global dialogue on AI governance through UN forums: the convening power there is what is really important in bringing the right stakeholders.


Yoichi Iida: Okay, thank you very much. Three very important points. Melinda mentioned the OECD toolkit, so now I would like to invite Lucia for your comment.


Lucia Russo: Indeed, we have started this project to build a toolkit to implement the OECD principles, and it comes exactly from this demand for more actionable resources that would guide countries on how to go from agreed principles to concrete actions. It was agreed by the Ministerial Council Meeting at the OECD at the beginning of June. What is this toolkit going to do, and how is it going to be built? It will be an online interactive tool that will allow users, mostly government representatives we expect, to make use of these resources by consulting and interrogating the large database that we have on national AI policies. But it will be a guided interaction that will allow countries to understand where they need to act. That concerns both the values-based section of the principles and the policy areas that include, as we have heard, issues around compute capacity, data availability, and research and development resources. It will guide countries through understanding their needs and priorities, and then provide suggestions: policy options that other countries at a similar level of advancement, or in the same region, have already put in place and that have proven effective. On the one hand, we want to build this user experience; on the other hand, we want to enrich the repository of national policies and strategies that we already have for 72 jurisdictions in the OECD database of national strategies. That is one of the priorities we see for building further upon this toolkit.
The idea is to build this toolkit through co-creation with countries, so that we better understand the needs; because, as we have heard, everyone agrees on the broader actions, but when it comes to practice we need to better understand the challenges, and that is where we want to work with countries and where we want to put the focus. We have also been advancing work on understanding AI uptake across sectors, again with a view to moving from a very broad conversation to concrete applications, and to understanding better the bottlenecks and the pathways to increase adoption when it comes to agriculture, health care, or education, for instance. And perhaps just to close on that point: when it comes to the Hiroshima reporting framework, it is interesting to see that the framework does not only talk about risk identification, assessment and mitigation. The last chapter also talks about how to use AI to advance human and global interests, and in this first reporting cycle by 20 companies there are initiatives reported on how companies are actually engaging with governments and civil society in projects that foster AI adoption across these key sectors. So once again, these will be priorities, and we see this as the


Yoichi Iida: key actions moving forward. Okay, thank you very much. Actually, the OECD principles, GPAI, and the Hiroshima process, all those initiatives, are backed by the OECD secretariat, so we look forward to working very closely together in the future. Time is rather limited, but first I invite Ansgar. What is your point?


Ansgar Koene: Sure. I would very much like to echo the point that was made regarding the need to move from principles to practice, as well as the point around capacity building. Within those, I would also like to highlight the work the OECD is doing around the AI incidents database, which is really helping us better understand where real failures of AI are occurring, as opposed to hypothetical ones. I also think it is very important for us to support and encourage broader participation in standards development in this space. Standards are often a key tool that industry uses to understand how to actually move towards implementation, and they are a good reference point, so that industry feels the wider community agrees this is a good approach. However, for all of these things to really achieve their intended outcome, providing end users with confidence and trust in these kinds of systems, we will also need reliable, repeatable assessments of how these systems and their governance frameworks are being implemented. In order to have these, we need greater transparency as to what particular assessments are intended to achieve and how they do it, so that we have expectation management and users understand how to interpret what an assessment has actually tested for. We need greater capacity building within the community to build an ecosystem of assessment and assurance providers in this space; we have seen some interesting work around that happening already in some jurisdictions, such as the UK, and the OECD is helping in this space as well. Effectively, we need the community to be able to provide clarity as to what a good governance framework is, how to approach this, hence the standards, and how to assess whether it has been achieved and done in the appropriate way, through things like assessments.


Yoichi Iida: Thank you very much. The engagement of all communities, including civil society, is very, very important, and the multi-stakeholder approach is definitely essential. So we believe the role of the IGF in AI governance is increasingly important. Sorry, the remaining time is short, but Juha: what is the role of Europe, and how do you think Europe will work with the world?


Juha Heikkila: So, we are of course very much involved in the discussions of the GDC, the Global Digital Compact, as I mentioned earlier. To echo what Melinda said, we think that the independent scientific panel is quite a crucial component of this. The GDC text is very useful; what was agreed last year in that regard was very successful, and we hope that it will be translated into implementation the way it was expressed, in the spirit of the text. In this regard, also for the AI governance dialogue, we think it is important that it does not duplicate existing efforts, because there are quite a lot of them; that is why the GDC text mentions that it would be held on the margins of existing meetings. I think that would be very useful, because overall there is some call for streamlining the number of events, initiatives and forums in the international governance landscape in the area of AI. This kind of multiplication is not necessarily sustainable in the long run. I think we have made partial steps forward in the integrated partnership formed between the Global Partnership on AI and the OECD. We welcome that, because we had some overlap between the expert communities, and I think that initiative now has a better sense of purpose, backed by the structures of the OECD, which makes it more impactful from our perspective. We look forward to how that will develop further; it will also have a role in taking these discussions to a greater audience and membership.
One thing I wanted to mention very briefly is that despite this multiplication of efforts and its seemingly almost chaotic nature, to exaggerate a bit, there are some constants. One of these constants, and Melinda mentioned this as well, is that the various efforts go in the same direction. One aspect included in many of them is the risk-based approach, which I mentioned as the foundation of the AI Act, but which is also reflected, for example, in the Hiroshima AI process guiding principles and code of conduct, and elsewhere in some of the statements made at the summits. So we have some common ground, but I think it would be desirable over the long run to try to seek some convergence and streamline.


Yoichi Iida: Okay, thank you very much. So there are a lot of efforts going on, and the GDC is also one of them. The role of the UN will be very important, but we need to avoid duplication, and we need to streamline and focus our efforts in the most efficient way. So in the development of AI governance discussions, the role of the IGF will be very important, and this needs to be the place where people get together and discuss not only Internet governance but also AI governance, or digital technology governance, among multistakeholders here at the IGF. Thank you very much. I wanted to take one question, but I am not sure I am allowed; we have run out of time, we have just got one minute. Just ask. Okay, please. But maybe you need a microphone. You can go there and ask, per the IGF protocol. I'm sorry. Thank you very much for great discussions.


Audience: My name is Shinichiro Terada from the University of Takyushu, Japan. I would like to understand AI governance compared to Internet governance. When the Internet was spreading globally, there were various challenges, such as supporting...


Yoichi Iida: Thank you very much for this complicated question, but we want to answer it.


Juha Heikkila: Okay, you have it. So it is a very complicated question. I will comment on one aspect and, of course, let my fellow panelists comment. Broadly speaking, I heard the comment the day before yesterday that AI is on the Internet and therefore Internet governance is suitable for it. But there is more to AI than what is on the Internet: think of embedded AI, for example, robotics, intelligent robotics, autonomous vehicles, and so on. So not all of AI is on the Internet. There may be some inspiration AI governance can take from the principles of Internet governance, but there are numerous issues related to AI governance which cannot simply be taken over from Internet governance, which are specific to AI, and for which you do not find matching aspects in Internet governance. So I would personally see them as broadly different, with potentially some inspiration for AI governance taken from Internet governance.


Yoichi Iida: Thank you very much. I would broadly agree with him. The only thing that I would say is that AI and Internet are two different things.


Abhishek Singh: AI includes a lot more than the Internet, as he mentioned, in terms of use cases and also inputs, as rightly said. The difference is that Internet governance is multi-stakeholder, whereas AI today is controlled by a few corporations. So, in order to make AI governance more equitable and bring in the principles of Internet governance, it will have to be multi-stakeholder. It will have to ensure that the way we approach managing AI governance is more inclusive, involving people who are technology providers as well as people who are technology users. When we achieve that balance, we will be able to make it more fair, more balanced, more equitable, and this will require much more global partnership than Internet governance has required so far. But the frameworks, mechanisms and protocols the Internet Governance Forum has evolved can be a good guiding light for working on AI governance principles.


Ansgar Koene: Maybe I can add one additional perspective, which links closely to what Juha mentioned as one of the themes picked up across so many of the governance approaches around AI: the risk-based approach. Within AI, the risk very much depends on the use case, because AI is a core technology that can be used in so many different kinds of applications and application spaces, whereas the Internet in that sense is more of a uniform thing.


Yoichi Iida: Okay. So, thank you very much. Time is up, but I hope you enjoyed the discussion. Please give a big round of applause to the excellent speakers. Actually, this is too excellent to close now, but time is up. Thank you very much. You would not believe it, you know: they were given the questions only at midnight yesterday. And we must also acknowledge the presence of His Excellency the President of Mauritius, who is here with us. Thank you very much, Your Excellency. Okay. Thank you for watching.



Yoichi Iida

Speech speed

112 words per minute

Speech length

2037 words

Speech time

1083 seconds

Japan initiated international AI governance discussions in 2016, leading to OECD AI principles (2019), Global Partnership on AI (2020), and the Hiroshima process responding to generative AI challenges

Explanation

Japan started international discussions on AI governance at G7 and OECD in 2016, which led to the first international and intergovernmental principles. This foundation enabled subsequent developments including the Global Partnership on AI launch and the Hiroshima process to address new challenges from generative AI technologies.


Evidence

OECD AI principles agreed in 2019, Global Partnership on AI launched in 2020, G7 Hiroshima process code of conduct and guiding principles agreed by end of year, reporting framework launched in 2024 with 20 reports by AI companies published on OECD website on April 22nd


Major discussion point

Evolution and Current State of Global AI Governance


Topics

Legal and regulatory



Lucia Russo

Speech speed

131 words per minute

Speech length

1146 words

Speech time

522 seconds

OECD has evolved from establishing principles in 2019 to providing policy guidance and analytical work, with three strategic pillars: moving from principles to practice, providing metrics through AI policy observatory, and promoting inclusive international cooperation

Explanation

The OECD serves as a convener of countries and multi-stakeholder groups, providing evidence-based understanding of AI risks and opportunities. The organization has developed three main strategic approaches to support implementation of AI principles through practical guidance and international cooperation.


Evidence

OECD AI policy observatory provides trends and data plus database of national AI policies, Global Partnership on AI and OECD merged in July 2024 creating 44 members including six non-OECD countries (India, Serbia, Senegal, Brazil, Singapore), expert community supporting work


Major discussion point

Evolution and Current State of Global AI Governance


Topics

Legal and regulatory | Development


Agreed with

– Abhishek Singh
– Melinda Claybaugh
– Ansgar Koene

Agreed on

Moving from principles to practice is the critical next step in AI governance


OECD is developing an interactive toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions

Explanation

The toolkit will be an online interactive tool allowing government representatives to consult a database of national AI policies through guided interaction. It will help countries understand where they need to act and provide policy suggestions from other countries with similar advancement levels or regional contexts.


Evidence

Toolkit approved by ministerial council meeting at OECD in June, will cover both values-based principles and policy areas including compute capacity, data availability, research and development resources, database covers 72 jurisdictions on national strategies


Major discussion point

Moving from Principles to Practice


Topics

Legal and regulatory | Development


The Global Partnership on AI merger with OECD expanded membership to 44 countries including six non-OECD members, broadening geographic scope for more inclusive conversations

Explanation

The merger achieved in July 2024 was a key milestone that broadened the geographic scope beyond OECD members to include developing countries. This expansion aims to foster more effective and inclusive conversations with a broader range of stakeholders.


Evidence

44 members total with six non-OECD countries: India, Serbia, Senegal, Brazil, Singapore, with expectation that broader geographic scope will continue to increase


Major discussion point

Inclusivity and Global South Participation


Topics

Development | Legal and regulatory


Agreed with

– Abhishek Singh
– Juha Heikkila
– Melinda Claybaugh

Agreed on

Need for inclusive international cooperation and avoiding fragmentation



Abhishek Singh

Speech speed

196 words per minute

Speech length

1445 words

Speech time

441 seconds

AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives

Explanation

Currently, AI power is concentrated in few companies and countries, requiring democratization to make Global South countries true stakeholders. This involves providing access to compute resources, ensuring training datasets are more inclusive of global contexts, and building institutional frameworks for funding and adoption.


Evidence

90% of compute controlled by one company, need access to at least 50,000 GPUs in India, high-end H100s and H200s made available at less than $1 per GPU per hour in India, AI Action Summit concept of current AI requiring financial commitments, India hosting AI Impact Summit in February next year


Major discussion point

Inclusivity and Global South Participation


Topics

Development | Infrastructure


Agreed with

– Lucia Russo
– Juha Heikkila
– Melinda Claybaugh

Agreed on

Need for inclusive international cooperation and avoiding fragmentation


Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions

Explanation

While principles are largely agreed upon globally, the challenge lies in translating these into operational actions. This requires building regulatory capacity to test AI solutions against standards and benchmarks, and developing practical tools for implementation.


Evidence

Principles agreed at every forum with same language, need for regulatory capacity to test solutions against standards and benchmarks, requirement for tools for watermarking AI content and frameworks for social media companies to prevent misinformation


Major discussion point

Moving from Principles to Practice


Topics

Legal and regulatory | Development


Agreed with

– Lucia Russo
– Melinda Claybaugh
– Ansgar Koene

Agreed on

Moving from principles to practice is the critical next step in AI governance


Global Digital Compact should focus on operational level implementation, capacity building networks, and enhanced cooperation on regulatory tools rather than just principles

Explanation

The Global Digital Compact should move beyond principle-setting to address practical operational needs. This includes establishing capacity building networks, enhancing regulatory cooperation, and creating frameworks for skill development across all countries.


Evidence

Global Digital Compact document mentions capacity building initiative and setting up capacity building network, need for skills and capacities development in all countries, requirement for enhanced cooperation on regulatory capacity building


Major discussion point

International Cooperation and Framework Coordination


Topics

Development | Legal and regulatory


India requires access to high-end compute resources, more inclusive training datasets, and global repository of AI solutions to enable equitable AI development

Explanation

India’s strategy focuses on using AI for enabling access to services for all citizens in all languages, particularly through voice interfaces. This requires practical access to compute resources, datasets that reflect global diversity, and shared AI solutions.


Evidence

Request for access to at least 50,000 GPUs, H100s and H200s available at less than $1 per GPU per hour in India, models primarily developed in the West and China need training on global datasets, concept of a global repository of AI solutions similar to the DPI ecosystem model


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Infrastructure | Development


AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations

Explanation

AI governance can learn from Internet governance frameworks and mechanisms, but requires more extensive global partnership due to the concentrated control of AI technology. The approach should be multi-stakeholder, involving both technology providers and users to achieve fairness and equity.


Evidence

AI controlled by few corporations, need for balance between technology providers and users, Internet Governance Forum protocols and mechanisms can serve as guiding light for AI governance principles


Major discussion point

AI Governance vs Internet Governance Comparison


Topics

Legal and regulatory | Development


Agreed with

– Juha Heikkila
– Ansgar Koene

Agreed on

AI governance differs significantly from Internet governance


Disagreed with

– Juha Heikkila

Disagreed on

Scope and nature of AI governance compared to Internet governance


J

Juha Heikkila

Speech speed

149 words per minute

Speech length

1277 words

Speech time

511 seconds

EU’s AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining innovation-friendly environment

Explanation

The AI Act takes a risk-based approach, only intervening where necessary for harmful, dangerous, or risky uses of AI. This creates a level playing field for all entities placing AI systems on the EU market regardless of origin, while avoiding excessive regulation that could stifle innovation.


Evidence

About 80-85% of AI systems would be unaffected by the Act, applies equally to European, Asian, American companies, prevents fragmentation by creating uniform rules across EU instead of patchwork of member state regulations


Major discussion point

Evolution and Current State of Global AI Governance


Topics

Legal and regulatory


EU engages bilaterally and multilaterally to support global level playing field for trustworthy AI, participating in G7, G20, Global Partnership on AI, and various summits while ensuring compatibility with EU strategy

Explanation

The EU’s international engagement is built on three pillars: trust/regulation, excellence/innovation, and international cooperation. The EU participates in all key international discussions to promote responsible stewardship and democratic governance of AI while ensuring alignment with its own regulatory framework.


Evidence

Founding member of Global Partnership on AI, involved in G7 Hiroshima process, G20 initiatives, Network of AI Safety Institutes, summits at Bletchley, Seoul, Paris, upcoming India summit, Global Digital Compact participation


Major discussion point

International Cooperation and Framework Coordination


Topics

Legal and regulatory


Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining

Explanation

While there appears to be a chaotic multiplication of AI governance efforts, common elements like risk-based approaches appear consistently across different frameworks. This suggests underlying agreement but highlights the need for better coordination and streamlining of efforts.


Evidence

Risk-based approach reflected in AI Act, G7 Hiroshima process guiding principles and code of conduct, and other summit statements, integrated partnership between Global Partnership on AI and OECD as example of streamlining


Major discussion point

International Cooperation and Framework Coordination


Topics

Legal and regulatory


Agreed with

– Ansgar Koene

Agreed on

Risk-based approach as a common foundation across AI governance frameworks


AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics

Explanation

While AI may take some inspiration from Internet governance principles, AI encompasses much more than what operates on the Internet. AI includes embedded systems, robotics, and autonomous vehicles that have characteristics not found in Internet governance, requiring specific approaches.


Evidence

Examples of non-Internet AI: embedded AI, robotics, intelligent robotics, autonomous vehicles, numerous AI-specific issues without matching aspects in Internet governance


Major discussion point

AI Governance vs Internet Governance Comparison


Topics

Legal and regulatory | Infrastructure


Agreed with

– Abhishek Singh
– Ansgar Koene

Agreed on

AI governance differs significantly from Internet governance


Disagreed with

– Abhishek Singh

Disagreed on

Scope and nature of AI governance compared to Internet governance


M

Melinda Claybaugh

Speech speed

157 words per minute

Speech length

864 words

Speech time

328 seconds

The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting dots and avoiding fragmentation

Explanation

There has been an incredible proliferation of frameworks, principles, and codes in AI governance showing strong international cooperation. However, the focus should now shift to connecting these frameworks rather than continuing to create new principles, to avoid the risk of fragmentation for global technology deployment.


Evidence

Similarity and coherence of approach across various high-level frameworks, challenge of running global technology across fragmented regulatory approaches


Major discussion point

Evolution and Current State of Global AI Governance


Topics

Legal and regulatory


Agreed with

– Lucia Russo
– Abhishek Singh
– Juha Heikkila

Agreed on

Need for inclusive international cooperation and avoiding fragmentation


The focus should shift from establishing more principles to translating existing frameworks into actionable measures for companies, countries, and policy institutions

Explanation

Moving from principle to practice involves translating frameworks into concrete measures for companies, helping countries implement AI solutions for public challenges, and providing policy institutions with practical toolkits. This includes ensuring countries have necessary infrastructure like energy policy, research capabilities, and compute power.


Evidence

Need for energy policy, scientific infrastructure, research infrastructure, data, compute power for countries to leverage AI, broadening conversation beyond early principles to include benefits alongside risk minimization


Major discussion point

Moving from Principles to Practice


Topics

Legal and regulatory | Infrastructure


Agreed with

– Lucia Russo
– Abhishek Singh
– Ansgar Koene

Agreed on

Moving from principles to practice is the critical next step in AI governance


Expanding conversations beyond risks to include benefits requires involving stakeholders who haven’t been part of the discussion, particularly from civil society and Global South

Explanation

The Hiroshima AI principles and process were important in ensuring focus on maximizing benefits alongside minimizing risks. This requires expanding the conversation to include more stakeholders, particularly civil society and Global South participants, to achieve tangible impacts.


Evidence

Hiroshima AI principles focus on maximizing benefits as well as minimizing risks, need to include civil society and Global South in conversations, AI Impact Summit as example of inclusive stakeholder engagement


Major discussion point

Inclusivity and Global South Participation


Topics

Development | Legal and regulatory


UN Scientific Panel on AI and global dialogue on AI governance should avoid duplicating existing efforts while providing independent scientific research and convening power

Explanation

The UN’s role should focus on providing independent scientific research through the Scientific Panel and using its convening power for global dialogue on AI governance. However, this should be done carefully to avoid duplicating existing international efforts and initiatives.


Evidence

UN Scientific Panel on AI as independent scientific body, global dialogue on AI governance through UN forums, importance of convening power for bringing right stakeholders together


Major discussion point

International Cooperation and Framework Coordination


Topics

Legal and regulatory


Building policy toolkits, libraries of evaluation resources, and continuing global scientific conversation are essential for advancing AI adoption

Explanation

Three key areas for moving from principles to practice include developing comprehensive policy toolkits for countries, creating centralized libraries of AI evaluations and benchmarks, and maintaining ongoing global scientific dialogue. These resources help countries advance their AI adoption capabilities.


Evidence

OECD well-placed to build policy toolkits, need for libraries of evaluations and benchmarks and third-party testing resources, importance of continuing global scientific conversation


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Development | Legal and regulatory


A

Ansgar Koene

Speech speed

151 words per minute

Speech length

884 words

Speech time

349 seconds

Companies need concrete governance frameworks to assess reliability and understand boundary conditions for mission-critical AI applications, with global initiatives providing both direct guidance and indirect harmonization across jurisdictions

Explanation

As organizations move from exploring AI to implementing it in mission-critical applications, they need confidence in governance frameworks that help assess AI system reliability and understand operational boundaries. Global initiatives provide direct guidance through principles and indirect benefits by helping countries create compatible regulations.


Evidence

Organizations moving from test cases to mission-critical applications where AI failure has significant impact, need to understand boundary conditions and provide correct usage information, OECD AI principles and G7 code of conduct providing foundation for organizational governance thinking


Major discussion point

Moving from Principles to Practice


Topics

Legal and regulatory


Agreed with

– Lucia Russo
– Abhishek Singh
– Melinda Claybaugh

Agreed on

Moving from principles to practice is the critical next step in AI governance


Standards development, reliable assessments, and transparency in evaluation methods require broader community participation and capacity building for assessment providers

Explanation

Effective AI governance implementation requires supporting broader participation in standards development, creating reliable and repeatable assessments, and building an ecosystem of assessment providers. This includes providing transparency about what assessments actually test and building community capacity for evaluation.


Evidence

OECD AI Incidents Monitor database helping understand real AI failures versus hypothetical ones, interesting work in jurisdictions like the UK on building assessment ecosystems, need for expectation management so users understand assessment scope


Major discussion point

Technical Infrastructure and Capacity Building


Topics

Legal and regulatory | Digital standards


Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific

Explanation

AI governance complexity stems from the fact that AI is a core technology applicable across many different use cases, where risk depends heavily on the specific application. This contrasts with Internet governance, which deals with a more uniform technology platform.


Evidence

Risk-based approach picked up across many AI governance frameworks, AI risk depends on use case while Internet is more uniform technology


Major discussion point

AI Governance vs Internet Governance Comparison


Topics

Legal and regulatory


Agreed with

– Juha Heikkila
– Abhishek Singh

Agreed on

AI governance differs significantly from Internet governance


A

Audience

Speech speed

122 words per minute

Speech length

51 words

Speech time

25 seconds

AI governance should learn from Internet governance experiences while recognizing the differences between the two domains

Explanation

The audience member from University of Takyushu questioned how AI governance compares to Internet governance, noting that when the Internet was spreading globally there were various challenges. This suggests interest in applying lessons learned from Internet governance to the emerging field of AI governance.


Evidence

Reference to challenges faced during global Internet expansion


Major discussion point

AI Governance vs Internet Governance Comparison


Topics

Legal and regulatory


Agreements

Agreement points

Moving from principles to practice is the critical next step in AI governance

Speakers

– Lucia Russo
– Abhishek Singh
– Melinda Claybaugh
– Ansgar Koene

Arguments

OECD has evolved from establishing principles in 2019 to providing policy guidance and analytical work, with three strategic pillars: moving from principles to practice, providing metrics through AI policy observatory, and promoting inclusive international cooperation


Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions


The focus should shift from establishing more principles to translating existing frameworks into actionable measures for companies, countries, and policy institutions


Companies need concrete governance frameworks to assess reliability and understand boundary conditions for mission-critical AI applications, with global initiatives providing both direct guidance and indirect harmonization across jurisdictions


Summary

All speakers agree that while AI governance principles have been established across various frameworks, the urgent need now is to translate these principles into practical, actionable measures that can be implemented by companies, governments, and institutions


Topics

Legal and regulatory | Development


Need for inclusive international cooperation and avoiding fragmentation

Speakers

– Lucia Russo
– Abhishek Singh
– Juha Heikkila
– Melinda Claybaugh

Arguments

The Global Partnership on AI merger with OECD expanded membership to 44 countries including six non-OECD members, broadening geographic scope for more inclusive conversations


AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives


Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining


The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting dots and avoiding fragmentation


Summary

Speakers unanimously agree on the importance of inclusive international cooperation that brings Global South countries into decision-making processes while avoiding fragmentation through better coordination of existing frameworks


Topics

Legal and regulatory | Development


Risk-based approach as a common foundation across AI governance frameworks

Speakers

– Juha Heikkila
– Ansgar Koene

Arguments

Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining


Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific


Summary

Both speakers recognize that risk-based approaches have emerged as a consistent element across different AI governance frameworks, providing common ground despite the complexity of AI applications


Topics

Legal and regulatory


AI governance differs significantly from Internet governance

Speakers

– Juha Heikkila
– Abhishek Singh
– Ansgar Koene

Arguments

AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics


AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations


Risk-based approach in AI governance reflects use-case dependency, unlike Internet’s more uniform nature, making AI governance more complex and application-specific


Summary

Speakers agree that while AI governance can learn from Internet governance principles, AI presents unique challenges requiring different approaches due to its broader applications beyond the Internet and concentrated control structure


Topics

Legal and regulatory | Infrastructure


Similar viewpoints

Both speakers emphasize the critical importance of including Global South countries and underrepresented stakeholders in AI governance discussions, moving beyond risk-focused conversations to include benefits and ensuring equitable access to AI resources

Speakers

– Abhishek Singh
– Melinda Claybaugh

Arguments

AI democratization requires ensuring Global South countries become stakeholders in decision-making, with access to compute resources, inclusive datasets, and capacity building initiatives


Expanding conversations beyond risks to include benefits requires involving stakeholders who haven’t been part of the discussion, particularly from civil society and Global South


Topics

Development | Legal and regulatory


Both speakers advocate for developing comprehensive toolkits and resource libraries that provide practical guidance for implementing AI governance principles, with OECD being well-positioned to lead this effort

Speakers

– Lucia Russo
– Melinda Claybaugh

Arguments

OECD is developing an interactive toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions


Building policy toolkits, libraries of evaluation resources, and continuing global scientific conversation are essential for advancing AI adoption


Topics

Development | Legal and regulatory


Both speakers stress the need for building regulatory and assessment capacity, including tools for testing AI systems and transparent evaluation methods that can be implemented by regulatory bodies

Speakers

– Abhishek Singh
– Ansgar Koene

Arguments

Operational implementation requires tools for regulators, enhanced regulatory capacity for testing AI solutions, and practical translation of agreed principles into concrete actions


Standards development, reliable assessments, and transparency in evaluation methods require broader community participation and capacity building for assessment providers


Topics

Legal and regulatory | Digital standards


Unexpected consensus

Innovation-friendly regulation approach

Speakers

– Juha Heikkila
– Melinda Claybaugh

Arguments

EU’s AI Act regulates specific uses of AI rather than the technology itself, using a risk-based approach that affects only 15-20% of AI systems while maintaining innovation-friendly environment


The proliferation of AI governance frameworks shows remarkable international cooperation, but now requires connecting dots and avoiding fragmentation


Explanation

It’s unexpected that a major tech company representative (Melinda Claybaugh) and an EU regulator (Juha Heikkila) would find such strong alignment on the innovation-friendly nature of regulation, with both emphasizing that current approaches avoid stifling innovation while providing necessary safeguards


Topics

Legal and regulatory


Streamlining and avoiding duplication of international efforts

Speakers

– Juha Heikkila
– Melinda Claybaugh
– Abhishek Singh

Arguments

Despite seeming multiplication of efforts, there are constants like risk-based approaches reflected across frameworks, suggesting common ground but need for convergence and streamlining


UN Scientific Panel on AI and global dialogue on AI governance should avoid duplicating existing efforts while providing independent scientific research and convening power


Global Digital Compact should focus on operational level implementation, capacity building networks, and enhanced cooperation on regulatory tools rather than just principles


Explanation

Unexpected consensus among representatives from different regions and sectors (EU regulator, US company, Indian government) on the need to streamline international AI governance efforts rather than create more frameworks, showing pragmatic alignment across different stakeholder types


Topics

Legal and regulatory


Overall assessment

Summary

The discussion reveals strong consensus on key foundational issues: the urgent need to move from principles to practical implementation, the importance of inclusive international cooperation that brings Global South countries into decision-making, the adoption of risk-based approaches as common ground, and recognition that AI governance requires different approaches than Internet governance. There is also unexpected alignment between regulators and industry on innovation-friendly approaches and the need to streamline rather than proliferate international frameworks.


Consensus level

High level of consensus with significant implications for AI governance development. The alignment suggests that despite different stakeholder perspectives, there is substantial agreement on both the direction and methodology for advancing global AI governance. This consensus provides a strong foundation for coordinated international action, particularly in developing practical implementation tools, building inclusive frameworks, and avoiding regulatory fragmentation. The agreement spans both procedural aspects (how to govern) and substantive priorities (what to focus on), indicating mature understanding of the challenges and realistic pathways forward.


Differences

Different viewpoints

Scope and nature of AI governance compared to Internet governance

Speakers

– Juha Heikkila
– Abhishek Singh

Arguments

AI governance differs from Internet governance because AI extends beyond Internet applications to embedded systems, robotics, and autonomous vehicles, requiring different approaches for AI-specific characteristics


AI governance should adopt multi-stakeholder principles from Internet governance while recognizing that AI requires more global partnership due to concentration of control in few corporations


Summary

Juha emphasizes the fundamental differences between AI and Internet governance due to AI’s broader scope beyond Internet applications, while Abhishek focuses on adapting Internet governance principles to AI while addressing the concentration of corporate control


Topics

Legal and regulatory | Infrastructure


Unexpected differences

Limited disagreement on fundamental AI governance principles despite different jurisdictional approaches

Speakers

– All speakers

Arguments

Various arguments about implementation approaches but consistent agreement on core principles


Explanation

Surprisingly, there was minimal fundamental disagreement among speakers from different regions and sectors (EU, India, OECD, private sector) on core AI governance principles, with most differences being about implementation methods rather than underlying goals


Topics

Legal and regulatory


Overall assessment

Summary

The discussion showed remarkably low levels of fundamental disagreement, with most differences centered on implementation approaches rather than core principles. The main areas of difference were: technical approaches to capacity building, the relationship between AI and Internet governance, and specific mechanisms for Global South inclusion.


Disagreement level

Low to moderate disagreement level with high consensus on principles but varying approaches to implementation. This suggests strong foundation for international cooperation but potential challenges in coordinating diverse implementation strategies across different jurisdictions and stakeholder groups.




Takeaways

Key takeaways

Global AI governance has rapidly evolved from Japan’s 2016 initiative to multiple frameworks (OECD principles, Hiroshima process, EU AI Act, Global Digital Compact), showing remarkable international cooperation but now requiring coordination to avoid fragmentation


The critical phase is moving from establishing principles to practical implementation – companies, governments, and organizations need concrete toolkits, assessment methods, and operational guidance rather than more high-level frameworks


Inclusivity and democratization of AI are essential, particularly ensuring Global South participation through access to compute resources, inclusive datasets, capacity building, and meaningful involvement in decision-making processes


Risk-based approaches have emerged as a common foundation across different frameworks, suggesting convergence potential despite apparent multiplication of governance efforts


AI governance differs fundamentally from Internet governance due to AI’s broader applications beyond Internet-based systems, requiring specialized approaches while potentially adopting multi-stakeholder principles


International cooperation should focus on interoperability between different jurisdictional approaches while respecting diverse national contexts and regulatory frameworks


Resolutions and action items

OECD to develop an interactive online toolkit to help countries implement AI principles through guided policy options based on successful practices from similar jurisdictions


Continue the Hiroshima AI process reporting framework with companies providing transparent reports on their AI governance practices


Expand Global Partnership on AI membership beyond current 44 countries to increase Global South representation


India to host AI Impact Summit in February focusing on operationalizing inclusive AI governance principles


Build global repository of AI solutions accessible to more countries, similar to the DPI ecosystem model


Develop capacity building networks as outlined in Global Digital Compact implementation


Create libraries of evaluation resources and benchmarks for AI assessment that can be shared globally


Unresolved issues

How to effectively streamline and coordinate the proliferation of AI governance frameworks without losing momentum or excluding stakeholders


Practical mechanisms for ensuring Global South access to high-end compute resources (like H100s, H200s) at affordable costs


Specific implementation details for making AI training datasets more inclusive and representative of global contexts


How to enhance regulatory capacity in developing countries to test and assess AI systems against established standards


Balancing innovation-friendly approaches with necessary safeguards across different jurisdictional frameworks


Defining the exact role and scope of UN Scientific Panel on AI to avoid duplication with existing initiatives


Addressing the concentration of AI development power in few companies and countries while maintaining technological advancement


Suggested compromises

Adopt risk-based approaches that allow different jurisdictions to implement AI governance according to their contexts while maintaining common foundational principles


Focus UN Global Digital Compact discussions on operational implementation rather than creating new principles, building on existing frameworks


Streamline international AI governance forums over time while maintaining the successful integrated partnership model between Global Partnership on AI and OECD


Balance innovation promotion with risk mitigation by focusing governance on specific high-risk AI applications rather than regulating the technology broadly


Use multi-stakeholder approaches from Internet governance while adapting to AI’s unique characteristics and broader application scope


Develop interoperable frameworks that respect different national approaches while ensuring global coordination and knowledge sharing


Thought provoking comments

Currently, the state of the technology is such that the real power of AI is concentrated in a few companies in a few countries. If you have to democratise this, if you have to ensure that the countries of the Global South become stakeholders in these conversations, we need to have this principle ingrained in all the countries around the world.

Speaker

Abhishek Singh


Reason

This comment was particularly insightful because it shifted the conversation from abstract governance principles to concrete power dynamics and equity issues. Singh highlighted the fundamental challenge that AI governance isn’t just about creating rules, but about addressing the concentration of technological power and ensuring meaningful participation from developing nations.


Impact

This comment significantly influenced the discussion’s trajectory by introducing the theme of inclusivity and democratization that became central to subsequent speakers’ remarks. It prompted other panelists to address capacity building, resource sharing, and the need for more equitable access to AI technologies. The comment also established the Global South perspective as a critical lens through which to evaluate governance frameworks.


I think at this moment, it’s really important to consider connecting the dots. I think we don’t want to continue down the road of duplication and proliferation and continued putting down of principles… And from a private company’s perspective, the challenge of running this technology and developing and deploying this technology that is global and doesn’t have borders, as we’re all familiar with, is the risk of the fragmentation of approach.

Speaker

Melinda Claybaugh


Reason

This observation was thought-provoking because it challenged the prevailing approach of creating multiple governance frameworks. Claybaugh identified a critical problem: the proliferation of principles without sufficient focus on implementation and interoperability, which creates practical challenges for global technology deployment.


Impact

This comment catalyzed a shift in the discussion from celebrating the various governance initiatives to critically examining their effectiveness and coherence. It introduced the concept of ‘fragmentation risk’ that other speakers then built upon, leading to discussions about streamlining efforts and improving interoperability between different jurisdictions’ approaches.


The AI Act does not regulate the technology in itself; it regulates certain uses of AI. So we have a risk-based approach and it only intervenes where it’s necessary… in fact it’s innovation friendly because about 80%, according to our estimate, maybe even 85%, of AI systems that we see around would be unaffected by it.

Speaker

Juha Heikkila


Reason

This clarification was insightful because it directly addressed widespread misconceptions about the EU AI Act being overly restrictive. By providing specific statistics and explaining the risk-based approach, Heikkila reframed the narrative around regulation from being innovation-stifling to being targeted and proportionate.


Impact

This comment helped establish a more nuanced understanding of regulatory approaches in the discussion. It influenced subsequent conversations about balancing innovation with safety, and provided a concrete example of how governance can be both protective and innovation-friendly, which other speakers referenced when discussing their own approaches.


We are seeing that especially as more and more of these organizations are moving from exploring possible uses of AI in test cases towards actually building it into mission critical use cases where failure of the AI system will either have a significant impact directly on consumers or citizens… it is becoming very critical for organizations to have the confidence that they have a good governance framework in place.

Speaker

Ansgar Koene


Reason

This comment was particularly valuable because it connected theoretical governance discussions to practical organizational needs. Koene highlighted the evolution from experimental AI use to mission-critical applications, emphasizing why governance frameworks must be reliable and actionable rather than merely aspirational.


Impact

This observation reinforced the ‘principles to practice’ theme that became central to the discussion. It provided concrete justification for why the governance frameworks being discussed matter in real-world implementation, and supported arguments made by other speakers about the need for practical toolkits and assessment mechanisms.


There is some call for streamlining in terms of the number of events and initiatives and forums that we have in the international governance landscape in the area of AI. I think that this kind of multiplication is not necessarily sustainable in the long run.

Speaker

Juha Heikkila


Reason

This was a bold and thought-provoking statement because it challenged the assumption that more governance initiatives are inherently better. Heikkila raised questions about the sustainability and effectiveness of the current proliferation of AI governance forums and frameworks.


Impact

This comment validated and expanded upon Claybaugh’s earlier concerns about fragmentation, creating a consensus around the need for consolidation and better coordination. It influenced the moderator’s closing remarks about the role of IGF and the importance of avoiding duplication, suggesting a potential path forward for more streamlined governance approaches.


Overall assessment

These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a routine overview of governance initiatives into a more sophisticated analysis of systemic challenges. First, Singh’s emphasis on power concentration and Global South inclusion established equity as a central concern, influencing all subsequent speakers to address inclusivity and capacity building. Second, Claybaugh’s observation about fragmentation and the need to ‘connect the dots’ created a critical lens through which other speakers evaluated existing frameworks, leading to discussions about interoperability and streamlining. Third, the collective emphasis on moving ‘from principles to practice’ – reinforced by Koene’s practical perspective and supported by others – shifted the conversation from celebrating existing frameworks to critically examining their implementation challenges. These comments created a more mature, nuanced discussion that acknowledged both the progress made in AI governance and the significant challenges that remain, ultimately pointing toward more coordinated, inclusive, and practically-oriented approaches to global AI governance.


Follow-up questions

How can we ensure researchers in low- and medium-income countries have access to similar compute resources as researchers in Silicon Valley?

Speaker

Abhishek Singh


Explanation

This addresses the digital divide and democratization of AI technology access globally, which is crucial for inclusive AI development


Can we build a global repository of AI solutions which can be accessible to more countries?

Speaker

Abhishek Singh


Explanation

This would facilitate knowledge sharing and prevent duplication of AI development efforts across different countries


How do we develop tools for watermarking AI content, and global frameworks so that social media companies play a part in preventing misinformation risks?

Speaker

Abhishek Singh


Explanation

This addresses the growing concern about AI-generated misinformation and deepfakes threatening democratic processes


How do we enhance regulatory capacity for testing AI solutions against standards and benchmarks?

Speaker

Abhishek Singh


Explanation

This is critical for ensuring AI systems meet safety and trustworthiness requirements before deployment


How do we connect different AI governance frameworks to avoid fragmentation and improve interoperability?

Speaker

Melinda Claybaugh


Explanation

This addresses the proliferation of different AI governance approaches that could create compliance challenges for global AI deployment


How do we expand the conversation beyond risks to include benefits and involve more stakeholders from civil society and the Global South?

Speaker

Melinda Claybaugh


Explanation

This ensures AI governance discussions are balanced and inclusive of diverse perspectives and use cases


How do we build reliable, repeatable assessments for AI systems implementation and governance frameworks?

Speaker

Ansgar Koene


Explanation

This is essential for providing end-users with confidence and trust in AI systems through standardized evaluation methods


How do we streamline the multiplication of AI governance efforts and forums to avoid duplication?

Speaker

Juha Heikkila


Explanation

The current landscape has numerous overlapping initiatives that may not be sustainable long-term and could lead to inefficiencies


How can principles of Internet governance be applied to AI governance, considering AI includes more than just Internet-based applications?

Speaker

Shinichiro Terada (audience member)


Explanation

This explores whether existing governance models can be adapted for AI, while recognizing the unique challenges AI presents beyond Internet governance


How do we make AI governance more multi-stakeholder and inclusive like Internet governance, while addressing the concentration of AI power in few corporations?

Speaker

Abhishek Singh (in response to audience question)


Explanation

This addresses the need for more democratic and distributed approaches to AI governance to prevent monopolization


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.