Policymaker’s Guide to International AI Safety Coordination

20 Feb 2026 17:00h - 18:00h


Session at a glance

Summary

This discussion centered on international coordination for AI safety governance, featuring leaders from the OECD, Singapore, Malaysia, the World Bank, and AI Safety Connect at a summit in New Delhi. The conversation focused on addressing the growing gap between rapidly advancing AI technology and the slower pace of safety measures and regulatory frameworks.


OECD Secretary General Mathias Cormann emphasized that building trust through inclusion and objective evidence is crucial for AI governance success. He highlighted the importance of coordinated transparency and incident reporting, noting that 25 organizations across nine countries have already submitted reports under the Hiroshima AI Process Code of Conduct. Singapore’s Minister Josephine Teo stressed the need to translate scientific knowledge into practical policy, drawing parallels to aviation safety protocols and emphasizing the importance of rigorous testing and interoperable standards.


Malaysia’s Minister Gobind Singh Deo discussed the establishment of the ASEAN AI Safety Network under Malaysia’s 2025 ASEAN chairmanship, advocating for institutionalizing AI safety conversations to ensure sustainability as technology evolves rapidly. World Bank Vice President Sangbu Kim acknowledged the challenge of preparing developing countries for AI threats, emphasizing the need for close collaboration with advanced economies and companies to share expertise and protective measures.


Investor and AI safety advocate Jaan Tallinn presented a more urgent perspective, expressing concern about the “cutthroat race” among top AI companies to develop superintelligence. He called for a slowdown in AI development until broad scientific consensus on safety is achieved, arguing that leading AI companies are now asking for such measures themselves. The panelists agreed that the next 12-24 months represent a critical window for establishing effective AI safety governance before capabilities advance beyond current evaluation and control mechanisms.


Key points

Major Discussion Points:

Urgent need for coordinated global AI safety governance: The discussion emphasized that AI technology is advancing rapidly while safety measures lag behind, creating a critical coordination gap that requires immediate international cooperation and standardized frameworks.


Role of middle powers and global majority states in AI governance: Rather than being passive recipients, countries like Singapore, Malaysia, and others can actively shape AI safety through pooled resources, market leverage, and regulatory innovation, with ASEAN’s AI Safety Network serving as a key example.


Translation of AI safety science into practical policy: Multiple speakers stressed the challenge of moving from theoretical frameworks to implementable policies, requiring extensive testing, simulation, and the development of practical safety tools that work across different contexts and jurisdictions.


Call for slowing down AI development and increasing transparency: There was significant discussion about the need to pause or slow down the race toward superintelligence, with calls for greater transparency from AI companies and more public awareness of the risks involved in current development trajectories.


Building sustainable institutions and incident reporting systems: The conversation highlighted the need for permanent institutional structures, coordinated incident reporting mechanisms, and sustained investment in AI safety research rather than relying on ad-hoc solutions.


Overall Purpose:

The discussion aimed to identify practical steps for enhancing international coordination on AI safety governance, particularly focusing on how middle powers and developing nations can play active roles in shaping global AI safety standards and closing the gap between rapid AI advancement and inadequate safety measures.


Overall Tone:

The tone was serious and urgent throughout, with speakers consistently emphasizing the critical nature of the current moment in AI development. While maintaining diplomatic professionalism, there was an underlying sense of concern about the pace of AI advancement outstripping safety measures. The tone remained collaborative and solution-focused, with speakers building on each other’s points about the need for coordinated action, though there was notable tension when discussing the competitive race between AI companies and the difficulty of implementing slowdowns.


Speakers

Speakers from the provided list:


Nicolas Miailhe – Founder of AI Safety Connect


Stuart Russell – Professor, Director of the International Association for Safe and Ethical AI (IASEAI)


Eileen Donahoe – Founder and managing partner of Sympathico Ventures, former U.S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UN Human Rights Council


Mathias Cormann – Secretary General of the OECD


Josephine Teo – Minister for Digital Development and Information at the Government of Singapore


Gobind Singh Deo – Minister of Digital, Malaysia (leading Malaysia’s 2025 ASEAN chairmanship)


Jaan Tallinn – AI investor, founding engineer at Skype, co-founder of the Future of Life Institute


Sangbu Kim – Vice President for Digital and AI at the World Bank


Osama Manzar – Director of Digital Empowerment Foundation, co-host of the event


Additional speakers:


None – all speakers mentioned in the transcript were included in the provided list of speaker names.


Full session report

This discussion on international AI safety governance took place at an AI Safety Connect summit in New Delhi, bringing together senior policymakers, international organization leaders, and AI safety advocates to address coordination challenges in global AI governance.


Opening Framework and Context

Nicolas Miailhe, founder of AI Safety Connect, opened by highlighting the fundamental challenge: the race towards artificial intelligence has moved beyond theoretical pursuit, with billions and maybe trillions of dollars now deployed to push AI frontiers while safety measures lag behind. AI Safety Connect was established to address this gap through regular global convenings every six months, alternating between AI summits and UN General Assembly meetings, alongside capacity building and trust-building exercises.


Stuart Russell introduced the International Association for Safe and Ethical AI (IASEAI), emphasizing that AI harms cross borders and require global coordination. He noted that while AI development is concentrated in a few countries, the impacts are global, making international cooperation essential.


Eileen Donahoe framed the discussion around how middle powers and global majority states can exercise agency in AI safety governance, even when they don’t control the origins of frontier AI technology. She highlighted the importance of the International AI Safety Report and Singapore Consensus on Global AI Safety Research Priorities as examples of meaningful contributions from these nations.


Building Trust Through Inclusion and Evidence

OECD Secretary General Mathias Cormann emphasized that trust is built through inclusion and objective evidence. He identified a fundamental tension: while markets reward the private sector for speed, scale, and innovation, governments must manage risk and protect public interest without stifling progress. AI advances much faster than traditional policy cycles, creating gaps between innovation opportunities and necessary oversight.


Cormann stressed that all parties share a common interest in ensuring AI systems are trustworthy, as without public trust, even powerful AI tools will struggle to gain broad adoption. The OECD’s AI principles, first adopted in 2019 and now adhered to by 50 countries, represent the first globally recognized baseline for trustworthy AI. He identified coordinated transparency and incident reporting as the most critical frontier AI safety infrastructure, noting that 25 organizations across nine countries have already submitted reports under the Hiroshima AI Process Code of Conduct.


Middle Power Agency and Practical Policy Development

Singapore’s Minister for Digital Development and Information, Josephine Teo, demonstrated how smaller states can actively shape AI safety through strategic policy development. She emphasized translating scientific knowledge into practical policy, using aviation safety analogies to illustrate this complexity.


Minister Teo’s aviation safety comparison focused on Singapore’s experience with A380 aircraft operations, describing how safety decisions required extensive research and testing to understand how wake turbulence safety distances perform under different conditions. This analogy highlighted two critical points: safety decisions cannot be based on speculation but require rigorous evidence and testing, and the time required to move from scientific understanding to implementable policy is substantial.


She identified two immediate priorities: refreshing AI safety research priorities due to rapid field advancement, with Singapore planning a second edition of their consensus document, and developing better practical testing tools to move beyond frameworks toward implementable safety measures.


Regional Coordination and Institutional Sustainability

Malaysia’s Minister Gobind Singh Deo presented Malaysia’s establishment of the ASEAN AI Safety Network under the country’s 2025 ASEAN chairmanship. He identified three critical layers for effective AI governance: ensuring conversations about AI risks persist beyond current governments through institutionalization; building enforcement capabilities so standards don’t remain merely “strong on paper”; and maintaining expertise to anticipate next-generation technology risks.


Minister Gobind emphasized that while immediate risks like online fraud, scams, and deepfakes affect vulnerable populations across the region, the response must be coordinated and sustainable across political transitions.


Development Finance and Capacity Building

World Bank Vice President Sangbu Kim addressed challenges facing Global South countries, which often become passive recipients of frontier AI systems developed under different conditions and risk tolerances. He emphasized connecting developing countries with advanced practices, including partnerships with major technology companies operating “red teams” that attempt to attack their systems using AI.


Kim illustrated the challenge through a parable about merchants selling both impenetrable spears and impenetrable shields, highlighting that AI can be used both for sophisticated attacks and robust protection. He stressed that AI safety requires dedicated investment from the system design phase rather than treating protection as an afterthought.


The Superintelligence Race and Calls for Slowdown

Jaan Tallinn, co-founder of the Future of Life Institute, focused on activities within leading AI laboratories, characterizing them as engaged in a “cutthroat race to build something that is smarter than they are.” He noted the paradox where leading AI company executives have called for development slowdowns but cannot implement them unilaterally due to competitive pressures.


Tallinn referenced a symbolic moment where AI leaders were reluctant to be photographed linking hands with political figures, indicating industry tensions. He identified two reasons for the current situation: the United States is conflicted because it relies on AI for competitive power, and the rest of the world doesn’t fully understand the danger’s magnitude.


The Future of Life Institute’s superintelligence statement has garnered over 130,000 signatures, aiming to create awareness and political demand for action. Tallinn argued that if sufficient pressure existed, the rest of the world collectively possesses more people and economic power than leading AI countries, making governance possible.


Critical Infrastructure and Implementation Priorities

The discussion revealed consensus around several critical infrastructure needs. Cormann advocated for comprehensive approaches across all areas, emphasizing the need to move quickly to catch up with technological advancement. The next development stage involves strengthening information sharing on AI failures and near misses, potentially evolving into an international AI Incident Response Center.


The Human-Centric Safety Imperative

Osama Manzar from the Digital Empowerment Foundation, representing grassroots perspectives from 40 million people reached over 23 years, reframed the safety discussion. He argued that AI safety should focus on “saving people from AI” rather than just technical safeguards, comparing it to road safety where the priority is protecting people from cars. He emphasized the need to preserve human intelligence in the face of artificial intelligence development.


The Critical Window and Next Steps

The discussion concluded with recognition of a narrow window before frontier AI capabilities advance beyond current evaluation and governance capacity. Miailhe noted the upcoming fourth edition of AI Safety Connect at the UN General Assembly in New York, emphasizing the need to translate high-level consensus into concrete, coordinated action.


The conversation demonstrated both the complexity of international AI safety coordination and potential pathways for effective action, while highlighting how middle powers and Global South countries can exercise meaningful agency in shaping AI safety outcomes through pooled resources, market leverage, and regulatory innovation.


Session transcript

Nicolas Miailhe

that the race towards artificial intelligence is no longer a theoretical pursuit. As billions, and maybe trillions, of dollars are now getting deployed to push the frontier of artificial intelligence, the technology is advancing rapidly. And safety is not keeping pace with it. There are wonderful opportunities on the other side of this quest. There are also big risks. And so that’s the purpose, that’s the reason AI Safety Connect was founded. AI Safety Connect is there to help shape the frontier AI safety and security agenda towards what I would frame as commonsensical AI risk management. AI Safety Connect has been founded to encourage global majority engagement in frontier AI safety. And AI Safety Connect has been created to showcase concrete governance coordination mechanisms, tools, and solutions.

So how do we do this? We convene at each AI summit. So last year we started in Paris, this year in India, next year we’re going to be in Switzerland. But we also convene at the UN General Assembly, right? We need a faster tempo for these safety discussions, so every six months we have this global convening. We also do capacity building, and we also do trust building exercises, at times behind closed doors. Well, this week in New Delhi has been an intense one, an impactful one. On Tuesday we had a full day of panels, conference, solution demonstrations, and closed-door workshop discussions on some specific nuts to crack to advance AI safety. We, for example, had the privilege of hosting Prime Minister Dick Schoof from the Netherlands on stage to deliver a special address on the role of top leadership in advancing AI safety.

We also engage with industry, engage with academia, of India and abroad. So it’s been an extremely busy week. Besides our main event, we had this closed-door discussion that I was mentioning, yesterday and today, these closed-door scientific dialogues. We’re going to publish the results soon. They brought together senior industry leaders to discuss shared responsibility for AI safety. Well, obviously, none of this would happen without partnership. And we want to thank our co-hosts, the International Association for Safe and Ethical AI and its director, Professor Stuart Russell, to whom I will hand over the floor in a few minutes, and the Digital Empowerment Foundation, who is anchoring us at the grassroots here with Osama Manzar, who will close the session later on.

And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moderate the panel, and we’re thankful for that. The Future of Life Institute, Ima and Jaan, who have been supporting this effort, and the Minderoo Foundation, whose team is here as well. It’s great to have your support and we are thankful for that. So today we’re about to hear from His Excellency Mathias Cormann, who’s the Secretary General of the OECD. We’re going to hear from Her Excellency Minister Josephine Teo, who’s the Minister for Digital Development and Information at the Government of Singapore. Thank you for your continuous support, really appreciate that. Same for Jaan Tallinn, who’s an AI investor but also a founding engineer at Skype and the co-founder of the Future of Life Institute. And last but not least, we also have Minister Gobind Singh Deo, who’s going to be with us from Malaysia, the Minister of Digital. Thank you, Minister, as well as Vice President Kim for Digital and AI at the World Bank. So an extremely important conversation to have. And before we welcome you to the stage, I would like to hand over the floor to Professor Stuart Russell to say a few words and to speak about also what’s happening next week in Paris. Thank you so much.

Stuart Russell

Thank you very much, Cyrus and Nico. So as Nico mentioned, the International Association for Safe and Ethical AI, or IASEAI, the world’s worst acronym, is a global, democratic, scientific and professional society. We have several thousand members and approaching 200 affiliate organizations. Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity. And as Nico mentioned, our second annual conference will take place in Paris starting on Tuesday. It’s still, I think, possible to register, but we’re already up over 1,300 people coming. It’s at UNESCO headquarters in Paris. Thank you. So achieving this mission of ensuring that AI systems operate safely and ethically is partly a technical challenge. How do we even build systems that have that property?

But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this panel is mainly about this second challenge. And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders. And we must coordinate to make sure that they don’t happen or they don’t originate anywhere. And it’s, I think, fitting that we are having this summit here in India, which has really, among other things, championed the idea that everyone on Earth should have a say. And so with that, I will hand over to Eileen. Thank you very much.

Nicolas Miailhe

Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the former U.S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UN Human Rights Council. Eileen? Please welcome the speakers to the floor. Your Excellency Mr. Mathias Cormann, Mr. Gobind Singh Deo, Ms. Josephine Teo, and Mr. Jaan Tallinn, as well as Mr. Sangbu Kim, please join us on stage.

Eileen Donahoe

Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get right to our speakers. So we’re here to share views on the opportunity for policymakers to impact international AI governance. As the race towards AGI and superintelligence intensifies, AI safety advocates face a compounding challenge. The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators. The result is an unharmonized governance landscape that fails to shape the behavioral incentives of those building and funding frontier AI. Economies, governments, and societies do not respond well to such mixed signals.

While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper international diplomacy on the most extreme risks. At this juncture, middle powers and global majority states can’t be seen as peripheral actors in this landscape. Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safety. Leading from the middle may turn out to be a more powerful approach than previously anticipated. Whether or not that collective power is exercised now will determine whether international AI governance moves from the rhetorical level to real-world impact on safety. This panel will aim to identify present-day coordination gaps in global AI practice and the global market.

We will also look at the role of global AI in international AI safety and highlight practical steps policymakers can take in the coming months to close them. So to our panel, I’ll start with Secretary General Cormann. The OECD has done remarkable work over the past decade, developing consensus on the OECD principles, providing a definition of AI systems that has resonated internationally, and playing an international role in operationalizing the Hiroshima International Code of Conduct. Along with those foundations, we now have the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. With these principles, definitions, and frameworks in mind, a two-part question for you. First, what are the key lessons learned from the process of building consensus and then implementing these frameworks?

And then second, looking ahead, what’s the most critical piece of coordinated frontier AI safety infrastructure we should be building now? Some have called for an international incident response center, and we’re all curious whether you think that should be a priority and achievable. Just some small, easy questions.

Mathias Cormann

In terms of what is the key to success, what is the most important lesson looking back: trust is built through inclusion and on the basis of objective evidence. And, you know, I think what we’ve learned over the last few years is that bringing together all the relevant actors, governments, companies, civil society, technical experts, is what we need to do. I mean, each has a different perspective and different imperatives. I mean, markets reward the private sector for speed, scale, and innovation, while governments must manage risk and protect the public interest without stifling progress. But a challenge, and it’s been mentioned in some of the opening remarks, a challenge for policymakers in this context is that AI is moving much faster than policy cycles have traditionally moved, which easily then creates gaps between innovation, progress, and opportunity on the one hand, and necessary oversight, mitigation, and management of risk on the other.

But all sides in this conversation do share an essential common interest, and that is to ensure that the systems that are being developed are trustworthy, because without public trust, in the end, even the most powerful AI tools will struggle to gain broad adoption. So that means that occasionally, and it’s not always popular with everyone, but occasionally we should slow down. Occasionally we should actually pause. Pause, test, monitor, audit, share information, and take the time and invest in building confidence that these systems can work as intended and respect fundamental rights. So that’s sort of, I guess, the first point. Another critical lesson involves international consistency, and this is part of the reason why these sorts of summits are so important: to really facilitate these conversations among countries and among different jurisdictions, because national priorities can vary quite widely, and there are of course fragmentation and compliance-cost-related risks. At the OECD, really what we’ve been doing for six decades now, across different policy areas, is to try and reduce fragmentation by achieving alignment around key principles, building shared evidence, and facilitating the necessary conversations to develop a more coherent, better coordinated approach moving forward. And on AI, I mean, we’ve developed the OECD principles, which were first adopted in 2019 and which are now adhered to by 50 countries around the world, and that was really the first globally recognized baseline for trustworthy AI. The OECD’s lifecycle definition of an AI system has since shaped policy frameworks from the EU AI Act to U.S. executive orders.

And we’ve had just earlier the meeting of the Global Partnership on AI, co-chaired by Korea and Singapore. We’ve got the OECD AI Policy Observatory, which is sort of essentially the broad gamut of all of the different policy approaches around the world, to provide countries and industries with data and evidence on what’s being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it, but really looking at the facts. Now, looking ahead, and you sort of ask a question here about what to do about the risk. I mean, the most critical piece of frontier AI safety infrastructure is coordinated transparency and incident reporting. I mean, the Hiroshima AI Process Code of Conduct and its reporting framework launched at the AI Action Summit in Paris last year.

You know, that’s a promising step, and we’ve got to continue to develop that. Since their publication, 25 organizations across nine countries have already submitted detailed reports on how they manage AI risks, offering for the first time a comparable view of developer practices across jurisdictions. The next stage is to strengthen information sharing on AI failures and near misses. The GPAI Common Framework for Incident Reporting aims to help us collectively learn from mistakes before they scale globally, and over time, this could evolve into an international AI Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith. Finally, we do need to scale access to practical safety tools.

With global partners, the OECD recently launched an open call for open-source safety and evaluation tools, hosted in the OECD.AI catalogue of tools and metrics, to make trustworthy AI easier to implement in practice. I mean, these are some initiatives to form the foundation of a more transparent, data-driven, and interoperable AI governance ecosystem.

Eileen Donahoe

Excellent. Minister Teo, a number of questions for you, but let me start with the fact that Singapore occupies a very distinctive position in the global geostrategic landscape as a pro-innovation, advanced knowledge economy, with deep commercial and diplomatic ties to both the U.S. and China. As the race to AGI intensifies and bilateral tensions mount, is there a role for Singapore and other middle powers to play in bridging the coordination gap to keep scientific and safety channels open? And also, what’s the most important step middle powers can take in the next 12 months to help establish a shared minimum understanding of frontier safety?

Josephine Teo

Well, thank you very much for that question. I think there is no running away from the fact that for smaller states, and that includes Singapore, the technology that our companies and our citizens are going to rely on does not originate from our shores. So it doesn’t necessarily come within our jurisdictions. We don’t always get to set the rules. Having said that, I do believe that we’re not without agency. It doesn’t mean that we take a step back and just let things happen to us. There are still things that we can do. One of the most important things, I think, as policymakers is for us to think about what it takes to translate what we know from science into policy.

And I wanted to just say why this is so important. In our case, as policymakers, the key questions will always be, are the policies that we make effective? And also, policies always come with trade -offs. With the question of effectiveness, there is always a need to understand what actually works, as opposed to what looks good on paper. With the question of trade -offs, it’s about understanding what we lose as a result of whatever safety aspects it is that we choose to put in place. And whether we can minimize them, can we mitigate them? Now, in areas where safety is the objective, we can’t just go with gut. We can’t just go with speculation. You take, for example, in my previous life, I was working on promoting Singapore’s Air Hub.

And we had to deal with a question of aviation safety. We were expanding our airport. It was going to carry many more passengers in and out of the country. But we are limited by the number of runways. And in land-scarce Singapore, you can’t just click your finger and say, let’s build a new one. It’s a long runway. It’s very expensive anyway. Then there is the question of what do you do when you have these jumbo jets like A380s? Because each time an A380 hits the runway, it creates so much of a blast that you really need to create more distance between the A380 taking off and the next aircraft that is scheduled to take off. Now, this is not a question that the transport minister can just decide on a whim.

The air traffic management has to decide on its policy of how much distance is considered safe between landings, or rather between takeoffs. And to answer this question, you really need to invest in the research. You need to invest in understanding the tests. So the science is one part of it. But between science and policy, you are actually going to need a lot of time. You are going to need a lot of tests. You are going to need a lot of simulations. You need to understand whether the distances that you decide are safe work well in a thunderstorm, a tropical thunderstorm. Do they work just as well in a snowstorm? Well, we don’t have snow in Singapore.

But you think about the airline that operates this. If each country that they fly into has a different safety distance, that creates some difficulty. So we therefore think that not only is there a need to invest in understanding the science, not only is there a need in understanding what testing looks like, what good testing looks like, there is also a need for us to think about what standards that will eventually be interoperable look like, which is why we think that international efforts, the collaboration that is being carried forward by the OECD through the Global Partnership on AI, the AI Safety Connect effort, and also IASEAI. Where is Stuart now? Those kinds of efforts, you can’t do away without.

At the outset, there is likely to be a bit of fragmentation. And the trade-off with not having these conversations is that we are not even going to make advances in AI safety. And I don’t think that that’s a very good place for us to be in. It doesn’t give us the assurance that we can deliver to our citizens. And it does not create a foundation of trust that will eventually help us to push ahead with the use of this technology on a wider scale. So that’s how we are thinking about it, Eileen. Thank you.

Eileen Donahoe

So let me turn to Minister Gobind from Malaysia, and note that under your leadership and Malaysia’s 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN’s agenda by establishing the ASEAN AI Safety Network. Malaysia is now finalizing its own AI National Action Plan, and Malaysia’s AI Governance Bill is expected in Parliament in 2026. So this dual-track approach of building national capacity while leading regional coordination represents a model of middle power agency that other countries are watching closely. So what lessons do you think other middle powers can draw from Malaysia’s experience? And on the ASEAN AI Safety Network, we have to note that operationalizing it will require sustained political will, technical capacity, and resources.

So what concrete steps must ASEAN take in the next 12 to 18 months to ensure that this isn’t just aspirational?

Gobind Singh Deo

Online fraud, for example: scams, deepfakes today, and huge concerns about certain vulnerable groups that are going to be impacted, children, older folk, and so on. So this is something that stretches across the region. How do we deal with it in a coordinated way and ensure that the conversation doesn't just stop with the government of the day, but is a conversation that extends over time, with clear policies that we can actually execute? The second layer that I think we need to think about is execution. When we speak about risks in AI and how we're going to govern those risks, we often talk about standards.

We often talk about regulation. We even speak about legislation at times, for areas that pose higher risks. But ultimately, it really comes back down to making sure you have an agency that can enforce it. You can have the best standards, regulations, and legislation, but if there is no institution that is really able to implement those standards, to ensure that they are properly implemented, and to ensure that the rules for failure to implement are enforced, then those standards, regulations, and policies are going to be strong on paper but are not going to have the impact you need. So again, how do you build this mechanism across ASEAN, where every country strengthens itself domestically first and then reaches across to the other ASEAN member states, hoping to learn from their experiences, so that together we can move ahead in this new world of AI and the threats we anticipate in future?

Now the third part, which is really important, is ensuring that while all this goes on, while you create those policies, have institutions that enforce them, and keep the discussions going at an ASEAN level, you also have the expertise looking at what comes next. We must make sure that our countries are prepared for the risks that come with the next generation of technology. This is important because you don't want a situation where new technology is adopted and you are not prepared for the risks that come with it. I think that's something we want to avoid, and that's the reason why I come back to where I started: we really need to look at building institutions that have the expertise and, of course, are able to sustain themselves as we go along, to build and deliver something that's impactful.

Sorry, but that’s in short what we’re doing in Malaysia today.

Eileen Donahoe

Excellent. Thank you so much. Okay. Let me turn to Vice President Kim and talk about the World Bank, which has been at the forefront of digital public infrastructure, helping countries leapfrog legacy systems. We note, though, that frontier AI systems are arriving in the Global South under very different conditions from previous waves of technology, and governments are under pressure to deploy AI systems quickly, often using models that haven't been adequately tested, let alone certified for their contexts, languages, or risk tolerances. So how can the World Bank help Global South countries move from being passive recipients of frontier AI to active shapers of safety and reliability requirements before these systems are deployed at scale?

Sangbu Kim

Thank you. In one word, we definitely need to make our clients well prepared from the start. When they design AI systems, they need to design the safety architecture within the system. In general, that's correct. But the real challenge is that nobody can really anticipate a new type of threat. Especially for some countries with low capacity, it is really hard to figure out what that threat will be. So in order to tackle that kind of dilemma, we need to work very closely with developed economies, companies, and governments, and with high-end examples, so that we can connect those good examples to the developing world. Partnership is one good example. We are helping our countries, for example, with a big tech company that is running red teams, which try very hard to attack their own system in advance by fully utilizing AI.

So through that type of practice and experiment, they can learn how to prevent AI attacks in the future, which is very much possible. In this way, it is inevitable for developing countries to keep track of new trends and new innovations, even in this safety and protection area. It is the only way, so I have to admit that constraint. But think about this anecdotal story from East Asia, from China and Korea. There are two vendors. One merchant is selling a spear, and he keeps saying that this spear is so strong it can get through any kind of shield. So that is one vendor. The other vendor is selling a shield.

And he says that this shield is one of the safest and strongest shields; no spear can get through it. This is exactly the ironic situation. If you think about AI, the AI attack is the spear: AI is so strong, smart, and capable that it can get through and hack any system with high-end intelligence and knowledge. But the good news is that, on the other hand, we can also build strong protective systems by fully utilizing AI. So that is the good news, but the constraint is that we do not clearly know how AI will evolve, or how to fully protect against those big attacks in the future. So to resolve this ironic situation, from the developing world's point of view and from the World Bank's point of view, the only way is to work and collaborate very closely and learn from advanced technology, advanced companies, and advanced countries.

Eileen Donahoe

Thank you so much. Last but not least, Mr. Jann Tallinn, you occupy a very rare position in this landscape as a founding engineer of Skype, an early investor in DeepMind and Anthropic, and the co-founder of the Future of Life Institute, which last October released a statement on superintelligence calling for a prohibition on the development of superintelligence until two conditions are met: number one, broad scientific consensus that it can be done safely and controllably, and second, strong public buy-in. Let's just ask the hard question: what would an effective prohibition look like in practice? How could that work?

Jann Tallinn

Thank you very much. So I think I'm a little bit different from the people on this panel. My main worries about the future are less about how AI is being deployed, diffused, and taken into practice. I'm much more worried about what is happening in the labs, in the top AI companies. I'm not sure what the future is going to look like, because they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence. And I mean, we just saw yesterday the photo where Narendra Modi, Dario Amodei, and Sam Altman refused to link hands.

I mean, this is indicative. We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can't do it alone. And I think there are two reasons why it's an unfortunate situation. One is that the U.S. as a country is conflicted: it basically relies on AI for its economic and competitive power, so it is very hesitant to meddle with the now-cutthroat situation among AI companies. And the rest of the world really doesn't understand how big the danger now is. So part of the reason why we did the superintelligence statement is to create awareness that there is increasing political demand to do something about this situation.

We now have more than 130,000 signatures, which is many times more than our original six-month pause letter had in 2023. So yeah, if there were enough pressure, I think this could be solved; the rest of the world is still clearly more powerful than the leading AI countries. There are more people, there's more economic power, and so on. The way I put it is that it's super hard to do a $10 billion project, and it's impossible to do it if it's illegal. So having these trillions flow into AI actually makes it easier to govern, not harder.

Eileen Donahoe

So I’m tempted to follow up with a question about investors and their potential role in this. They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. So what would it take to bring investors meaningfully into the safety conversation?

Jann Tallinn

So, yeah, I think the answer is fairly simple. I don't think investors play much of a role anymore, because the leading AI companies are now above the level where private investors can influence them. They will IPO soon, and in an IPO market there is a level playing field, which means that if somebody isn't funding, somebody else will. So I don't think investors can affect things now; they could have, five or ten years ago.

Eileen Donahoe

Great. Okay, so since we're running short on time, I'm going to ask one question, about the 12-month window, and ask each of you to answer it very briefly. Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months, before frontier AI capabilities advance beyond our ability to evaluate and govern them. So what would each of you recommend be prioritized in the next year to two years to enhance safety and security?

Josephine Teo

I think there are two, really. I think the AI safety research priorities need to be refreshed, because the field has moved so quickly. The Singapore Consensus identified a set, but as soon as it was published, we recognized that it would be out of date. So we need to refresh it; that's why we're going to have the second edition worked on, hopefully in a few months. The second thing is that we can't just keep thinking about frameworks and guidelines. At some point, we need to be able to introduce better testing tools. And until we are able to do so, the companies that are developing and deploying AI models don't have a very practical way of giving assurance either.

So I'd like to see some further advancements in those two areas in the next 12 months.

Mathias Cormann

I'll be really quick. I know there's always a temptation in these sorts of conversations to ask what is the one thing that can fix it all, and the truth is there isn't one thing. We've got to go as fast as we can to play catch-up, to a degree, but we've also got to go as comprehensive and as deep as we can. There's just no alternative: there's catch-up to be played, we've got to put in a real effort, and it's got to be right across the board. I don't think you can just say there's one thing that will make us all safe and it's going to be okay.

Eileen Donahoe

Minister Gobind?

Gobind Singh Deo

I think, as I said earlier, we need to start thinking about how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance. In this regard, we have to understand that things are going to move very quickly; you're going to see new technology develop very fast, which brings new risks as well. So you've got to build something that's sustainable, and I think in order to do that, institutionalizing it should be a priority.

Sangbu Kim

Everyone is really rushing for AI system development, AI solution development. That means AI safety measures are currently under-invested. So I really would like to urge all of us to think about this: it is not free. We need to spend some money to protect the system in advance, from the start, when you design the system. That means we should allocate some money to fully invest in...

Eileen Donahoe

Jann Tallinn?

Jann Tallinn

Slow down. We really need to slow down; the companies are asking for it. Instrumental to that would be transparency: more people should know what the leaders of AI companies know, in order to understand how crucial the slowdown now is.

Eileen Donahoe

Okay, great. Well, I believe we have a bit of a close coming, and thank you all so much. I wish we had had a day to talk about all of these issues. But thank you so much. Thank you very much.

Nicolas Miailhe

Thank you very much, Eileen, and this fantastic panel, excellencies, colleagues, friends. What we've heard today confirms something important. The coordination gap in frontier AI safety is real, and it is urgent. And as we've discussed today, it is closable. And before I hand over the floor to Osama Manzar to close with a few minutes of remarks and reflection, I'd like to invite you all to the next United Nations General Assembly in New York, where we hope to organize the fourth edition of AI Safety Connect, hopefully with many of the great policymakers and leaders we have heard from today, to carry forward that collective effort. Osama, the floor is yours.

Osama Manzar

Well, thank you very much. We are one of the co-organizers of this one, being local. Apart from thanking each one of you who didn't get up and leave the room, and everyone who gave safety remarks on the use of AI, on behalf of the 40 million people we have reached in the last 23 years, and the billions of other people we are going to work for, I want to suggest that the entire safety aspect of AI should come more from "please save people from AI". Because that's safety, like a car on the road.

You know, we have to save people before you teach people how to think. So we also have to keep a very, very strong focus: how do we save human intelligence from artificial intelligence? And how do we build the safeguards and the ethics into all the policy playbooks? Thank you very much.

S

Stuart Russell

Speech speed

119 words per minute

Speech length

250 words

Speech time

125 seconds

Cross‑border harms demand coordinated governance

Explanation

Russell emphasizes that AI‑related harms such as psychological damage or loss of human control cross national borders, making global coordination essential to prevent or mitigate them.


Evidence

“And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders.” [1]. “And we must coordinate to make sure that they don’t happen or they don’t originate anywhere.” [9].


Major discussion point

Global coordination & governance for AI safety


Topics

Artificial intelligence


ICI as a global democratic scientific society

Explanation

Russell describes the International Association for Safe and Ethical AI (ICI) as a worldwide, democratic, scientific and professional society whose mission is to ensure AI systems operate safely and ethically for humanity.


Evidence

“So as Nico mentioned, the International Association for Safe and Ethical AI, or ICI, the world’s worst acronym, is a global, democratic, scientific and professional society.” [29]. “Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity.” [30].


Major discussion point

Purpose and activities of AI Safety Connect / ICI


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


N

Nicolas Miailhe

Speech speed

149 words per minute

Speech length

812 words

Speech time

325 seconds

AI Safety Connect convenings and rapid safety dialogue

Explanation

Miailhe explains that AI Safety Connect was founded to fill the coordination gap by holding frequent global convenings, including AI summits and UN sessions, to showcase governance tools and foster collaboration.


Evidence

“The coordination gap in frontier AI safety is real, and it is urgent.” [5]. “We convene at each AI summit.” [24]. “We need a faster tempo for these safety discussions, so every six months we have this global convening.” [25]. “AI Safety Connect has been created to showcase concrete governance coordination mechanisms, tools, and solutions.” [19].


Major discussion point

Global coordination & governance for AI safety


Topics

Artificial intelligence | Capacity development


E

Eileen Donahoe

Speech speed

122 words per minute

Speech length

1101 words

Speech time

539 seconds

Panel aims to identify coordination gaps

Explanation

Donahoe states that the panel’s purpose is to pinpoint present‑day coordination gaps in global AI practice and market, guiding policy action.


Evidence

“This panel will aim to identify present -day coordination gaps in the global AI practice and the global market.” [36].


Major discussion point

Global coordination & governance for AI safety


Topics

Artificial intelligence | The enabling environment for digital development


Middle powers can leverage pooled resources and normative influence

Explanation

Donahoe argues that middle powers, through pooled resources, market leverage, normative influence, and regulatory innovation, can shape global AI practices and safety.


Evidence

“Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safeties.” [47].


Major discussion point

Role of middle powers & global majority states


Topics

Artificial intelligence | Financial mechanisms | Capacity development


Narrow 12‑24‑month window before capabilities outpace governance

Explanation

Donahoe highlights the community’s belief that there is only a 12‑24‑month period before frontier AI capabilities exceed our ability to evaluate and govern them, urging immediate action.


Evidence

“Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months before frontier AI capabilities advance beyond our ability to evaluate and govern them.” [88].


Major discussion point

Urgency and the narrow 12‑24‑month window


Topics

Artificial intelligence | Capacity development


M

Mathias Cormann

Speech speed

145 words per minute

Speech length

864 words

Speech time

356 seconds

OECD incident‑reporting framework and open‑source safety tools

Explanation

Cormann points to the OECD’s launch of an open call for open-source safety and evaluation tools and the GPAI Common Framework for Incident Reporting as foundational infrastructure for AI safety.


Evidence

“With global partners, the OECD recently launched an open call for open-source safety and evaluation tools, hosted in the OECD.ai catalogue of tools and metrics, to make trustworthy AI easier to implement in practice.” [35]. “The GPAI Common Framework for Incident Reporting aims to help us collectively learn from mistakes before they scale globally, and over time this could evolve into an international AI Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith.” [41].


Major discussion point

Infrastructure and tools for AI safety


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


No single fix; need comprehensive, rapid catch‑up

Explanation

Cormann stresses that there is no single solution; a fast, comprehensive effort across all domains is required to catch up with AI development.


Evidence

“I mean, these are some initiatives to form the foundation of a more transparent, data‑driven, and interoperable AI governance ecosystem, and” [85]. “I don’t think that you can just say there’s the one thing that will make us all safe and it’s going to be okay. … there’s just no alternative, there’s catch up to be played, we’ve got to put a real effort and it’s got to be right across the board…” [114].


Major discussion point

Urgency and the narrow 12‑24‑month window


Topics

Artificial intelligence | The enabling environment for digital development


J

Josephine Teo

Speech speed

143 words per minute

Speech length

889 words

Speech time

371 seconds

Translating science into policy and building testing standards

Explanation

Teo emphasizes the need for policymakers to translate scientific knowledge into policy, develop interoperable standards, and introduce better testing tools for AI safety.


Evidence

“One of the most important things I think as policymakers is for us to think about what it takes to translate what we know from science into policy.” [57]. “At some point, we need to be able to introduce better testing tools.” [56]. “You need to invest in understanding the tests.” [60].


Major discussion point

Role of middle powers & global majority states


Topics

Artificial intelligence | Capacity development


Refresh AI safety research priorities within 12 months

Explanation

Teo calls for updating AI safety research priorities and expects further advancements within the next year.


Evidence

“I think the AI safety research priorities need to be refreshed because the field has moved so quickly.” [26]. “So I’d like to see in the next 12 months some further advancements.” [86].


Major discussion point

Infrastructure and tools for AI safety


Topics

Artificial intelligence | Capacity development


G

Gobind Singh Deo

Speech speed

174 words per minute

Speech length

535 words

Speech time

183 seconds

ASEAN AI Safety Network and enforcement mechanisms

Explanation

Deo highlights Malaysia’s leadership in placing AI at the ASEAN agenda via the ASEAN AI Safety Network and stresses the need for policies and institutions to enforce safety across the region.


Evidence

“under your leadership and Malaysia’s 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN’s agenda by establishing the ASEAN AI Safety Network.” [64]. “Now the third part which is really important is also ensuring that whilst this goes on, you create those policies, you have institutions that enforce and the discussions persist at an ASEAN level.” [67].


Major discussion point

Role of middle powers & global majority states


Topics

Artificial intelligence | Capacity development | The enabling environment for digital development


Institutionalizing AI safety governance for sustainability

Explanation

Deo argues that building structures and institutionalizing AI safety conversations is essential to keep pace with rapid technology changes.


Evidence

“institutionalizing it should be a priority.” [119]. “We need to start thinking how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance…” [119].


Major discussion point

Urgency and the narrow 12‑24‑month window


Topics

Artificial intelligence | The enabling environment for digital development


S

Sangbu Kim

Speech speed

112 words per minute

Speech length

525 words

Speech time

280 seconds

Allocate dedicated funding for safety architecture from design stage

Explanation

Kim stresses that safety architecture must be built into AI systems from the start and that money must be allocated to fund these measures.


Evidence

“When they design the AI systems, definitely they need to design the safety architecture within the system.” [34]. “so we should allocate some money to fully invest in … when you design the system” [90].


Major discussion point

Urgency and the narrow 12‑24‑month window


Topics

Artificial intelligence | Financial mechanisms


World Bank partnership to help Global South adopt safety‑by‑design

Explanation

Kim notes that collaboration with the World Bank is a way for developing countries to learn from advanced technology and embed safety‑by‑design.


Evidence

“So in order to solve this type of ironical situation from the developing world point of view and from the World Bank point of view, this is the only way to very closely work and collaborate and learn from the advanced technology and advanced company and advanced country.” [71].


Major discussion point

Role of middle powers & global majority states


Topics

Financial mechanisms | Capacity development


J

Jann Tallinn

Speech speed

143 words per minute

Speech length

517 words

Speech time

216 seconds

Private investors have limited influence as AI firms approach IPOs

Explanation

Tallinn observes that leading AI companies are now beyond the reach of private investors, reducing investors’ ability to affect safety decisions.


Evidence

“I don’t think investors play much of a role anymore because the leading AI companies now are kind of above the level where private investors can influence them.” [95]. “They will now IPO soon.” [97].


Major discussion point

Investment, incentives, and the role of investors


Topics

Financial mechanisms | Artificial intelligence


Massive capital flows can be leveraged to pressure AI safety

Explanation

Tallinn suggests that if enough pressure is applied, the huge capital flowing into AI could be used to enforce safety standards.


Evidence

“So if there was like enough pressure this could be solved.” [106]. “So having these trillions flow into AI actually makes it easier to govern than harder.” [104].


Major discussion point

Investment, incentives, and the role of investors


Topics

Financial mechanisms | Artificial intelligence


Call for slowdown and greater transparency to enable effective prohibition

Explanation

Tallinn urges a slowdown of AI development and increased transparency so that broader society can understand AI leaders’ knowledge and support potential prohibitions.


Evidence

“Slow down, we really need to slow down … the companies are asking for it … transparency … more people should know what the leaders of AI companies know in order to basically understand how crucial the slowdown now is.” [101].


Major discussion point

Urgency and the narrow 12‑24‑month window


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


O

Osama Manzar

Speech speed

72 words per minute

Speech length

193 words

Speech time

159 seconds

Ethical framing: protect people like road safety

Explanation

Manzar frames AI safety as a matter of protecting people, comparing it to car safety and emphasizing the need to embed ethics and safeguards from the start.


Evidence

“I want to suggest that the entire safety aspect of AI should be more from please save people from AI.” [31]. “Because that’s the safety like it’s a car on the road.” [80].


Major discussion point

Ethical framing: protecting humanity from AI


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Agreements

Agreement points

Need for international coordination and collaboration on AI safety

Speakers

– Nicolas Miailhe
– Mathias Cormann
– Stuart Russell
– Josephine Teo
– Gobind Singh Deo

Arguments

AI Safety Connect convenes global stakeholders every six months to accelerate safety discussions and build consensus on frontier AI safety


Trust is built through inclusion and objective evidence, requiring collaboration between governments, companies, civil society, and technical experts


Global coordination is essential because AI harms cross borders and affect everyone


Middle powers like Singapore can bridge coordination gaps between AI superpowers and maintain scientific and safety channels


Malaysia’s dual approach of building national capacity while leading regional coordination through ASEAN AI Safety Network serves as a model for other middle powers


Summary

All speakers agree that AI safety requires coordinated international efforts, with different actors playing complementary roles in building consensus and addressing cross-border risks


Topics

Artificial intelligence | The enabling environment for digital development


Urgency of addressing AI safety before technology outpaces governance

Speakers

– Nicolas Miailhe
– Eileen Donahoe
– Jann Tallinn
– Josephine Teo

Arguments

AI Safety Connect was founded to help shape frontier AI safety through global convening and engagement


There is a narrow 12-24 month window before frontier AI capabilities advance beyond our ability to evaluate and govern them


Leading AI companies are in a cutthroat race to build superintelligence, which poses significant risks


AI safety research priorities need constant refreshing due to rapid technological advancement


Summary

Multiple speakers emphasize the time-sensitive nature of establishing AI governance frameworks before technological capabilities exceed our ability to control them


Topics

Artificial intelligence | The enabling environment for digital development


Need for practical implementation tools and institutional capacity

Speakers

– Mathias Cormann
– Josephine Teo
– Gobind Singh Deo
– Sangbu Kim

Arguments

Coordinated transparency and incident reporting systems are the most critical frontier AI safety infrastructure needed


AI safety research priorities need constant refreshing due to rapid technological advancement


Countries need institutions with enforcement capabilities, not just standards and regulations on paper


Investment in AI safety measures is currently insufficient and requires dedicated funding allocation


Summary

Speakers agree that moving beyond frameworks and guidelines to practical tools, enforcement mechanisms, and adequate funding is essential for effective AI governance


Topics

Artificial intelligence | Capacity development | Financial mechanisms


Importance of evidence-based policymaking for AI safety

Speakers

– Mathias Cormann
– Josephine Teo
– Sangbu Kim

Arguments

Trust is built through inclusion and objective evidence, requiring collaboration between governments, companies, civil society, and technical experts


Translating scientific knowledge into effective policy requires extensive testing, simulations, and understanding of real-world conditions


The World Bank helps developing countries prepare for AI deployment by connecting them with advanced economies and companies for safety practices


Summary

Speakers emphasize that AI safety policies must be grounded in scientific evidence, rigorous testing, and practical experience rather than speculation


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Similar viewpoints

Both speakers emphasize the role of international organizations and middle powers in creating frameworks and bridging gaps between different stakeholders in AI governance

Speakers

– Mathias Cormann
– Josephine Teo

Arguments

The OECD has developed globally recognized AI principles adopted by 50 countries and created frameworks for international consistency


Middle powers like Singapore can bridge coordination gaps between AI superpowers and maintain scientific and safety channels


Topics

Artificial intelligence | The enabling environment for digital development


Both speakers focus on the need for institutional capacity building and preparation for emerging AI risks, particularly in developing countries

Speakers

– Gobind Singh Deo
– Sangbu Kim

Arguments

Building sustainable institutions is necessary to handle rapidly evolving AI technology and emerging risks


The World Bank helps developing countries prepare for AI deployment by connecting them with advanced economies and companies for safety practices


Topics

Artificial intelligence | Capacity development | The enabling environment for digital development


Both speakers emphasize the urgent need for transparency and immediate action to address AI risks before capabilities exceed governance mechanisms

Speakers

– Jaan Tallinn
– Eileen Donahoe

Arguments

Transparency is crucial so more people understand what AI company leaders know about the risks


There is a narrow 12-24 month window before frontier AI capabilities advance beyond our ability to evaluate and govern them


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Unexpected consensus

Need to slow down AI development

Speakers

– Mathias Cormann
– Jaan Tallinn

Arguments

Trust is built through inclusion and objective evidence, requiring collaboration between governments, companies, civil society, and technical experts


Leading AI companies are in a cutthroat race to build superintelligence, which poses significant risks


Explanation

It’s unexpected to see consensus between an OECD Secretary General (representing institutional governance) and an AI investor/engineer on the need to occasionally pause and slow down AI development, showing alignment across different stakeholder perspectives


Topics

Artificial intelligence | The enabling environment for digital development


Focus on protecting people from AI rather than just technical safeguards

Speakers

– Osama Manzar
– Stuart Russell

Arguments

AI safety should focus on protecting people from AI rather than just technical safeguards


AI safety requires both technical solutions for building safe systems and governance mechanisms to ensure only safe systems get built


Explanation

The consensus between a grassroots digital development advocate and a leading AI safety researcher on prioritizing human-centered approaches over purely technical solutions represents an unexpected alignment between different perspectives on AI safety


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Overall assessment

Summary

The speakers demonstrate strong consensus on the urgency of AI safety coordination, the need for international collaboration, and the importance of moving from frameworks to practical implementation. There is broad agreement on the time-sensitive nature of establishing governance before AI capabilities exceed control mechanisms.


Consensus level

High level of consensus across diverse stakeholders (government officials, international organization leaders, investors, and civil society) indicates strong foundation for coordinated action on AI safety governance, though implementation challenges remain significant given the rapid pace of technological development.


Differences

Different viewpoints

Focus of AI safety priorities – technical governance vs. development race concerns

Speakers

– Jaan Tallinn
– Mathias Cormann
– Josephine Teo
– Gobind Singh Deo

Arguments

Leading AI companies are in a cutthroat race to build superintelligence, which poses significant risks


Trust is built through inclusion and objective evidence, requiring collaboration between governments, companies, civil society, and technical experts


AI safety research priorities need constant refreshing due to rapid technological advancement


Countries need institutions with enforcement capabilities, not just standards and regulations on paper


Summary

Tallinn focuses on the existential risks from the competitive race to superintelligence and calls for slowing down development, while other speakers emphasize building collaborative governance frameworks, updating research priorities, and strengthening institutional capacity for implementation


Topics

Artificial intelligence | The enabling environment for digital development


Approach to AI safety implementation – comprehensive vs. targeted interventions

Speakers

– Mathias Cormann
– Josephine Teo
– Sangbu Kim

Arguments

There is no one thing that can fix it all; governance has to go as fast as it can to play catch-up, but also as comprehensive and as deep as it can


AI safety research priorities need constant refreshing due to rapid technological advancement


Investment in AI safety measures is currently insufficient and requires dedicated funding allocation


Summary

Cormann advocates for a comprehensive approach across all areas without focusing on single solutions, while Teo emphasizes specific priorities like refreshing research priorities and developing testing tools, and Kim focuses specifically on funding allocation for safety measures


Topics

Artificial intelligence | Financial mechanisms | Capacity development


Role of private investors in AI governance

Speakers

– Jaan Tallinn
– Eileen Donahoe

Arguments

Private investors no longer play much of a role, because the leading AI companies are now above the level where private investors can influence them


They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation


Summary

Tallinn argues that private investors no longer have meaningful influence over leading AI companies due to their scale and upcoming IPOs, while Donahoe suggests investors are playing a decisive role in shaping incentives but are absent from governance discussions


Topics

Artificial intelligence | Financial mechanisms | The enabling environment for digital development


Unexpected differences

Effectiveness of international frameworks and standards

Speakers

– Mathias Cormann
– Gobind Singh Deo

Arguments

The OECD has developed globally recognized AI principles adopted by 50 countries and created frameworks for international consistency


Countries need institutions with enforcement capabilities, not just standards and regulations on paper


Explanation

While both speakers work in international governance, Cormann emphasizes the success of existing international frameworks like the OECD principles, while Gobind directly challenges the effectiveness of standards without enforcement mechanisms, suggesting a fundamental disagreement about whether current international approaches are sufficient


Topics

Artificial intelligence | The enabling environment for digital development


Timeline and urgency of AI safety measures

Speakers

– Eileen Donahoe
– Jaan Tallinn
– Josephine Teo

Arguments

There is a narrow 12-24 month window before frontier AI capabilities advance beyond our ability to evaluate and govern them


Leading AI companies are in a cutthroat race to build superintelligence, which poses significant risks


AI safety research priorities need constant refreshing due to rapid technological advancement


Explanation

Despite all acknowledging the rapid pace of AI development, there is unexpected disagreement about the specific timeline and urgency – Donahoe frames it as a 12-24 month window for action, Tallinn emphasizes the immediate need to slow down the race, while Teo focuses on continuous adaptation rather than a crisis timeline


Topics

Artificial intelligence | Monitoring and measurement


Overall assessment

Summary

The main areas of disagreement center on the appropriate balance between slowing AI development versus accelerating governance mechanisms, the effectiveness of comprehensive versus targeted interventions, and the role of different stakeholders in shaping AI safety outcomes


Disagreement level

Moderate disagreement with significant implications – while speakers share common concerns about AI safety, their different approaches could lead to fragmented or conflicting policy responses. The disagreement between development slowdown advocates and governance acceleration proponents represents a fundamental tension that could impact the effectiveness of international coordination efforts


Partial agreements

Partial agreements

All speakers agree on the need for practical, implementable AI safety measures, but disagree on whether to prioritize transparency/reporting systems, evidence-based policy translation, institutional enforcement capacity, or international capacity building partnerships

Speakers

– Mathias Cormann
– Josephine Teo
– Gobind Singh Deo
– Sangbu Kim

Arguments

Coordinated transparency and incident reporting systems are the most critical frontier AI safety infrastructure needed


Translating scientific knowledge into effective policy requires extensive testing, simulations, and understanding of real-world conditions


Countries need institutions with enforcement capabilities, not just standards and regulations on paper


The World Bank helps developing countries prepare for AI deployment by connecting them with advanced economies and companies for safety practices


Topics

Artificial intelligence | Capacity development | The enabling environment for digital development


All agree on the importance of transparency and global coordination for AI safety, but disagree on the urgency and methods – Tallinn emphasizes immediate transparency to support slowdown, Cormann focuses on building trust through inclusive collaboration, and Russell emphasizes the cross-border nature requiring coordination

Speakers

– Jaan Tallinn
– Mathias Cormann
– Stuart Russell

Arguments

Transparency is crucial so more people understand what AI company leaders know about the risks


Trust is built through inclusion and objective evidence, requiring collaboration between governments, companies, civil society, and technical experts


Global coordination is essential because AI harms cross borders and affect everyone


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Takeaways

Key takeaways

There is an urgent coordination gap in frontier AI safety that requires immediate global action within a 12-24 month window before AI capabilities advance beyond governance capacity


Trust and effective AI governance are built through inclusive collaboration between governments, companies, civil society, and technical experts based on objective evidence


Middle powers and Global South countries can play active roles in shaping AI safety through pooled resources, market leverage, and regulatory innovation rather than being passive recipients


The most critical infrastructure needed is coordinated transparency and incident reporting systems, along with practical safety tools and evaluation mechanisms


Leading AI companies are in a dangerous race to build superintelligence that requires external intervention and potential slowdown measures


AI safety requires both technical solutions for building safe systems and governance mechanisms to ensure only safe systems are deployed


International frameworks like OECD principles and ASEAN AI Safety Network provide foundations, but implementation requires sustained political will and enforcement capabilities


Investment in AI safety measures is currently insufficient and needs dedicated funding allocation from the design phase


Resolutions and action items

AI Safety Connect will organize the fourth edition at the UN General Assembly in New York to continue coordination efforts


Singapore will work on refreshing AI safety research priorities in the coming months for a second edition


ASEAN must take concrete steps in the next 12-18 months to operationalize the AI Safety Network beyond aspirational goals


Countries should institutionalize AI safety conversations to build sustainable structures that can adapt to rapidly evolving technology


The OECD will continue developing the incident reporting framework and evolving it toward an international AI Incident Response Center


Developing countries should closely collaborate with advanced economies and companies to learn safety practices and stay current with emerging threats


Unresolved issues

How to effectively implement a prohibition on superintelligent AI development until safety consensus and public buy-in are achieved


How to bring investors meaningfully into the safety conversation when leading AI companies are beyond private investor influence


How to balance the need for AI innovation and economic competitiveness with necessary safety slowdowns


How to ensure enforcement capabilities exist for AI safety standards and regulations beyond just having policies on paper


How to preserve human intelligence and protect people from AI while enabling beneficial AI development


How to achieve the necessary transparency from AI companies about their capabilities and risks


How to allocate sufficient resources for AI safety research and implementation across different jurisdictions


Suggested compromises

Occasional pausing and slowing down of AI development to test, monitor, audit, and build confidence in systems while balancing innovation needs


Building coordinated incident reporting systems that don’t expose companies to commercial or legal penalties for good faith reporting


Developing interoperable international standards that reduce fragmentation while allowing for national priority variations


Creating a comprehensive approach that addresses multiple aspects of AI safety rather than seeking single solutions


Establishing regular six-month global convenings to maintain faster tempo for safety discussions than traditional policy cycles


Thought provoking comments

Markets reward the private sector for speed, scale, and innovation, while governments must manage risk and protect the public interest without stifling progress. But a challenge… is that AI is moving much faster than policy cycles have traditionally moved, which easily creates gaps between innovation, progress, and opportunity on one side, and necessary oversight, mitigation and management of risk on the other.

Speaker

Mathias Cormann


Reason

This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incentives driving rapid AI development and the slower, more deliberative pace of policy-making. It frames the core challenge not as a technical problem but as a structural mismatch between different institutional rhythms.


Impact

This observation set the analytical framework for much of the subsequent discussion. It influenced Minister Teo’s detailed aviation safety analogy and Minister Gobind’s emphasis on building sustainable institutions. The comment shifted the conversation from abstract safety principles to concrete governance challenges.


In areas where safety is the objective, we can’t just go with gut. We can’t just go with speculation… between science to policy, you are actually going to need a lot of time. You need to invest in understanding the tests. You need to invest in understanding whether the distances that you decide are safe works well in a thunderstorm, a tropical thunderstorm.

Speaker

Josephine Teo


Reason

This aviation safety analogy was particularly insightful because it grounded the abstract challenge of AI safety in a concrete, well-understood domain. It illustrated how safety standards require extensive testing, validation across different conditions, and international coordination – all of which take significant time and resources.


Impact

This analogy provided a tangible framework that other participants could relate to, moving the discussion from theoretical concerns to practical implementation challenges. It reinforced the need for investment in testing infrastructure and international standards, themes that other speakers built upon.


I’m kind of like a little bit different from the people on this panel… I’m way more worried about what is happening in the labs, in the top AI companies… they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence.

Speaker

Jaan Tallinn


Reason

This comment introduced a fundamentally different perspective on AI risk, shifting focus from deployment and governance issues to the existential risks emerging from the development process itself. It challenged the panel’s emphasis on regulatory frameworks by suggesting the core problem lies in the competitive dynamics driving unsafe development practices.


Impact

Tallinn’s intervention created a notable shift in the discussion’s tone and scope. It moved the conversation from incremental safety measures to questions of whether AI development should continue at all. This perspective forced other participants to grapple with more fundamental questions about the pace and direction of AI progress.


We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can’t do it alone… if there was enough pressure, I think clearly like the rest of the world is still kind of more powerful than the kind of leading AI countries.

Speaker

Jaan Tallinn


Reason

This observation was particularly striking because it revealed that even AI company leaders recognize the need for external intervention to slow development. It reframed the power dynamics, suggesting that coordinated international pressure could be more effective than assumed, even against powerful tech companies.


Impact

This comment challenged assumptions about the inevitability of rapid AI development and highlighted the potential agency of middle powers and international coordination. It provided a concrete pathway for the kind of international cooperation other panelists had been discussing in more abstract terms.


I want to suggest that the entire safety aspect of AI should be more about ‘please save people from AI’… How do we save human intelligence from artificial intelligence?

Speaker

Osama Manzar


Reason

Despite being brief, this closing comment reframed the entire safety discussion by inverting the typical framing. Instead of ‘AI safety’ (making AI safe), he emphasized ‘safety from AI’ (protecting humans from AI’s impacts). This shift in perspective highlighted the human-centric concerns that may be getting lost in technical discussions.


Impact

As the final substantive comment, this provided a provocative reframing that challenged participants to consider whether their approaches adequately prioritized human welfare over technological advancement. It served as a powerful closing note that questioned the fundamental assumptions underlying much of the preceding discussion.


Overall assessment

These key comments shaped the discussion by progressively deepening and challenging the conversation’s assumptions. Cormann’s structural analysis provided the foundation, Teo’s practical analogy grounded abstract concepts in reality, Tallinn’s existential perspective introduced urgency and questioned fundamental premises, and Manzar’s human-centric reframing challenged the entire approach. Together, these interventions moved the discussion from procedural coordination questions to fundamental questions about power, pace, and priorities in AI development. The comments created a productive tension between incremental governance approaches and more radical interventions, ultimately enriching the conversation by forcing participants to grapple with both immediate practical challenges and longer-term existential questions.


Follow-up questions

How to effectively implement an international AI incident response center

Speaker

Eileen Donahoe


Explanation

This was posed as a key question about whether an international incident response center should be a priority and if it’s achievable, representing a critical coordination mechanism for AI safety


How to translate scientific knowledge into effective AI safety policy

Speaker

Josephine Teo


Explanation

Minister Teo emphasized the gap between understanding the science and creating practical policies, highlighting the need for extensive testing, simulations, and understanding of real-world conditions


How to develop interoperable international AI safety standards

Speaker

Josephine Teo


Explanation

She noted the importance of creating standards that work across different countries and conditions, similar to aviation safety standards


How to build sustainable institutions for AI governance that persist beyond current governments

Speaker

Gobind Singh Deo


Explanation

Minister Gobind emphasized the need to institutionalize AI safety conversations to ensure continuity and effectiveness across political changes


How to prepare developing countries for next-generation AI risks they cannot yet anticipate

Speaker

Sangbu Kim


Explanation

He highlighted the challenge that low-capacity countries face in preparing for unknown future AI threats and the need for better knowledge transfer from advanced economies


What would an effective prohibition on superintelligent AI development look like in practice

Speaker

Eileen Donahoe


Explanation

This addresses the practical implementation challenges of slowing down or stopping AI development until safety conditions are met


How to bring investors meaningfully into the AI safety conversation

Speaker

Eileen Donahoe


Explanation

She noted that investors play a decisive role in shaping incentives but are largely absent from governance discussions


Need to refresh AI safety research priorities due to rapid field advancement

Speaker

Josephine Teo


Explanation

She noted that the Singapore consensus on research priorities becomes outdated quickly and needs regular updating


How to develop better practical testing tools for AI safety

Speaker

Josephine Teo


Explanation

She emphasized that companies need practical ways to provide safety assurance beyond just frameworks and guidelines


How to protect human intelligence from artificial intelligence

Speaker

Osama Manzar


Explanation

He suggested shifting focus from protecting AI to protecting people from AI, emphasizing the need to safeguard human intelligence and capabilities


How to achieve transparency so more people understand what AI company leaders know about risks

Speaker

Jaan Tallinn


Explanation

He argued that broader understanding of the risks known to AI leaders is crucial for building support for necessary slowdowns


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.