Policymaker’s Guide to International AI Safety Coordination
20 Feb 2026 17:00h - 18:00h
Summary
Nicolas Miailhe opened by noting that the AI race has moved from theory to massive investment, with billions, perhaps trillions, of dollars being poured into development while safety research lags behind [1-3]. He explained that AI Safety Connect was created to mobilise global-majority engagement, convene semi-annual gatherings at AI summits and the UN, and run capacity-building and closed-door trust-building exercises [6-8][11-15].
Stuart Russell described the International Association for Safe and Ethical AI as a worldwide scientific society of thousands of members that aims to ensure AI systems operate safely and ethically, and he highlighted that achieving this requires both technical solutions and coordinated governance [33-38][40-44]. He stressed that AI-related harms cross borders, making global coordination essential, and pointed to India’s summit as an example of inclusive international dialogue [44-46].
Eileen Donahoe framed the panel by stating that AI is advancing rapidly and being deployed with minimal guardrails, creating a fragmented, non-binding governance landscape, and argued that middle-power and global-majority states can leverage pooled resources and normative influence to shape global AI safety [56-61][62-66]. She added that the panel would identify present coordination gaps and propose practical steps for policymakers in the coming months [66-68].
Mathias Cormann of the OECD identified inclusion of all stakeholders and evidence-based trust as key lessons, and warned that policy cycles are too slow for the pace of AI innovation, urging occasional pauses for testing and auditing [77-84][85-88]. He argued that the most critical frontier-AI safety infrastructure is coordinated transparency and incident reporting, citing the Hiroshima Code of Conduct and the emerging Global Partnership on AI incident-reporting framework as steps toward an international response centre [91-96].
Singapore’s Minister Josephine Teo noted that smaller states depend on foreign AI technologies, so translating scientific knowledge into effective policy requires rigorous testing, standards and international collaboration through bodies such as the OECD, AI Safety Connect and IASEAI [103-110][111-119][140-144]. Malaysia’s Gobind Singh Deo highlighted the ASEAN AI Safety Network and a forthcoming AI Governance Bill, emphasizing that without agencies capable of enforcement, standards and regulations remain ineffective, and that governance must be institutionalised across the region [152-158][162-166][167-173].
World Bank Vice-President Sangbu Kim said the Bank can help Global South countries embed safety architecture from the design stage by partnering with advanced economies and firms to share red-team practices and build capacity [178-184][185-200]. Jaan Tallinn warned that the most pressing risk is the unchecked race for superintelligence, calling for a slowdown supported by transparency and noting that private investors now have little influence over the leading AI firms [210-218][221-227][231-235].
Nicolas concluded that the coordination gap in frontier AI safety is real and urgent but can be closed, inviting participants to the next AI Safety Connect at the UN General Assembly to continue collective action [260-264].
Key points
Major discussion points
– Rapid AI progress outpaces safety and policy, demanding urgent global coordination.
Nicolas opens by noting the “race towards artificial intelligence is no longer a theoretical pursuit” and that “safety is not keeping pace” [1-4]. Stuart Russell stresses that AI harms “cross borders” and require coordinated governance [44-46]. Eileen Donahoe describes a “fragmented…risk-management landscape” that fails to shape incentives [57-59]. Mathias Cormann adds that “AI is moving much faster than policy cycles” creating gaps [82-84].
– Middle-power and global-majority states can lead AI governance through pooled resources, normative influence, and regional networks.
Donahoe argues that “middle powers…can shape the direction of global AI practices” and that their collective power will determine whether governance moves beyond rhetoric [62-66]. Cormann highlights the need for “inclusion…objective evidence” and notes the OECD’s success in building consensus among many countries [77-80]. Singapore’s Minister Teo stresses translating science into policy and the importance of interoperable standards, while Malaysia’s Minister Gobind points to the ASEAN AI Safety Network as a model for regional coordination [103-110][152-156].
– Concrete infrastructure proposals: transparent incident reporting, an international incident-response centre, and open-source safety tools.
Cormann identifies “coordinated transparency and incident reporting” as the most critical frontier-AI safety infrastructure [91-92]. He describes the GPAI Common Framework for Incident Reporting and the prospect of an international Incident Response Center [95-97]. He also mentions the OECD’s open-source safety-tool catalogue to make trustworthy AI easier to implement [98-99].
– Building institutional capacity, standards, and enforcement mechanisms is essential.
Teo uses the aviation-safety analogy to illustrate the need for rigorous testing, standards, and long-term research before policies are set [110-119][132-138]. Gobind emphasizes that standards and regulations must be backed by agencies capable of enforcement, and that ASEAN needs sustained political will and technical resources [162-166][172-173].
– Calls for a slowdown or even a provisional prohibition on super-intelligence development, and discussion of investors’ limited influence.
Cormann suggests occasional “pause, test, monitor, audit” to build public trust [84-86]. Jaan Tallinn warns that the “cut-throat race” in labs is the biggest risk and cites the Future of Life Institute’s call for a prohibition until broad scientific consensus and public buy-in are achieved [207-214][226-227]. He later notes that investors now have little leverage over the leading AI firms [231-235].
Overall purpose / goal of the discussion
The panel was convened to diagnose the current “coordination gap” in frontier AI safety, highlight why middle-power and global-majority engagement is crucial, and outline concrete, near-term actions (incident-reporting frameworks, standards, institutional capacity, and possible slowdown measures) that policymakers can take within the next 12-24 months to make AI development safer and more trustworthy [57-66][91-99][240-250].
Overall tone and its evolution
The conversation begins with an urgent, almost alarmist tone about the speed of AI development and the lag in safety [1-4][57-59]. It quickly shifts to a collaborative, solution-focused tone as participants emphasize inclusive coordination, shared lessons, and concrete infrastructure [77-84][91-99]. Mid-discussion, the tone becomes more pragmatic, using analogies (aviation safety) and regional examples to stress the need for standards and enforcement [110-119][158-166]. Towards the end, a more cautionary and even admonitory tone emerges, calling for pauses, possible prohibitions, and highlighting the limited role of investors [84-86][207-214][256]. The closing remarks return to a hopeful yet urgent tone, reaffirming that the coordination gap is “real, urgent, and closable” [262-264].
Speakers
Speakers (from the provided list)
– Gobind Singh Deo – Minister of Digital (Malaysia), leading Malaysia’s 2025 ASEAN chairmanship; involved in AI governance and the ASEAN AI Safety Network. [S1]
– Jaan Tallinn – AI investor; founding engineer of Skype; co-founder of the Future of Life Institute. [S3]
– Mathias Cormann – Secretary-General of the Organisation for Economic Co-operation and Development (OECD). [S5]
– Sangbu Kim – Vice President for Digital and AI at the World Bank. [S6]
– Stuart Russell – Professor of Computer Science, University of California, Berkeley; Director of the International Association for Safe and Ethical AI (IASEAI). [S8]
– Nicolas Miailhe – Founder/CEO of AI Safety Connect; organizer of AI safety convenings and capacity-building initiatives.
– Eileen Donahoe – Founder and Managing Partner of Sympathico Ventures; former U.S. Special Envoy and Coordinator for Digital Freedom and former U.S. Ambassador to the UN Human Rights Council. [S14]
– Osama Manzar – Co-organizer (Digital Empowerment Foundation) for AI Safety Connect; involved in grassroots outreach. [S18]
– Josephine Teo – Minister for Digital Development and Information, Government of Singapore. [S20]
Additional speakers (not in the provided list)
– Cyrus – Host/moderator who introduced the session (mentioned in the opening remarks).
– Dick Schoof – Prime Minister of the Netherlands (mentioned as a guest speaker delivering a special address).
The session opened with Nicolas Miailhe warning that the “race towards artificial intelligence is no longer a theoretical pursuit” and that “billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence” while “safety is not keeping pace with it” [1-4]. He noted that AI Safety Connect was created to “help shape the frontier AI safety and secure agenda towards what I would frame as commonsensical AI risk management” and to “encourage global majority engagement into frontier AI safety” [6-8]. The event was co-hosted by the International Association for Safe and Ethical AI (IASEAI) and the Digital Empowerment Foundation, represented by Osama Manzar [11-15], and featured a special address by Prime Minister Dick Schoof of the Netherlands [9-10]. To achieve its aims, the organisation convenes semi-annual gatherings at major AI summits (Paris, India, upcoming Switzerland) and at the UN General Assembly, and also runs capacity-building and closed-door trust-building exercises [11-15].
Stuart Russell introduced the International Association for Safe and Ethical AI (IASEAI), describing it as “a global, democratic, scientific and professional society” with “several thousand members and approaching 200 affiliate organisations” [33-35]. He also joked that IASEAI is “the world’s worst acronym.” Russell framed AI safety as both a technical challenge (“how do we even build systems that have that property?”) and a governance challenge (“how do we ensure that those are the systems and only those systems get built?”) [40-42]. He stressed that harms such as psychological damage or loss of human control “cross borders” and therefore “global coordination is essential” [44-46].
Eileen Donahoe set the agenda by observing that the “race to AGI and superintelligence intensifies” while “the technology is advancing rapidly and being deployed with minimal guardrails” [56-57]. She argued that existing risk-management processes are “ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators” [58-60]. Donahoe highlighted the strategic potential of “middle-power and global-majority states” to “leverage pooled resources, market leverage, normative influence and regulatory innovation” to shape AI safety, asserting that “leading from the middle may turn out to be a more powerful approach than previously anticipated” [62-65]. The panel’s purpose, she said, was to “identify present-day coordination gaps in the global AI practice and the global market” and to propose “practical steps policymakers can take in the coming months” [66-68].
Mathias Cormann (OECD) reflected on lessons learned from building consensus. He stressed that “trust is built through inclusion and on the basis of objective evidence” and that bringing together governments, companies, civil society and technical experts is essential because each “has a different perspective and different imperatives” [77-80]. He warned that “AI is moving much faster than policy cycles have traditionally moved,” creating gaps between innovation and necessary oversight [82-84]. Cormann advocated occasional “pause, test, monitor, audit, share information” to build confidence that systems respect fundamental rights [85-86]. Regarding infrastructure, he identified “coordinated transparency and incident reporting” as the most critical piece, citing the Hiroshima Code of Conduct and the emerging Global Partnership on AI (GPAI) Common Framework for Incident Reporting, which already has 25 organisations submitting detailed risk-management reports [91-96]. He suggested that this framework could evolve into an “international AI Incident Response Center” that shares alerts without penalising reporters [95-97]. Cormann also announced an OECD open call for open-source safety and evaluation tools, to be catalogued on the OECD.ai platform, thereby making trustworthy AI “easier to implement in practice” [98-99].
Singapore’s Minister Josephine Teo explained that smaller states “cannot set the rules” because the AI technologies they rely on “do not originate from our shores” [104-107]. Nevertheless, she argued that policymakers must “translate what we know from science into policy” through rigorous testing, simulations and interoperable standards. Using an aviation-safety analogy, she described how determining safe runway separation for A380s required “invest[ing] in the research… in the tests… in the simulations” and warned that differing national standards would create operational difficulties [110-119][132-138]. Teo concluded that “international collaboration through bodies such as the OECD, AI Safety Connect and IASEAI” is required to develop standards that are both scientifically sound and globally interoperable [140-144].
Minister Gobind Singh Deo (Malaysia) highlighted the ASEAN AI Safety Network as a concrete regional mechanism and noted Malaysia’s “dual-track approach of building national capacity while leading regional coordination” [152-156]. He warned that standards, regulations and legislation are ineffective without an “agency that can enforce it,” otherwise they remain “strong on paper but … not … have that impact” [162-166]. Deo called for sustained political will, technical capacity and resources to operationalise the network, and argued that ASEAN must first strengthen domestic institutions before moving to a collective regional framework [167-173].
Sangbu Kim, Vice-President for Digital and AI at the World Bank, described how the Bank can help Global South countries embed safety “from the design stage” by “partnering with advanced economies… and very high-end examples” to share red-team practices and build capacity [178-184][185-200]. He noted the paradox that an AI attack is “the spear” capable of penetrating any shield, yet “we also can build strong protective systems by fully utilizing AI,” underscoring the need for close collaboration between developing and advanced economies to stay ahead of emerging threats [196-199][200].
Jaan Tallinn, co-founder of the Future of Life Institute, warned that the “cut-throat race” in top AI labs poses the greatest danger and called for a “slowdown” until two conditions are met: a broad scientific consensus that superintelligence can be developed safely, and strong public buy-in [210-214]. Tallinn illustrated the competitive climate with a recent photo of Narendra Modi, Dario Amodei and Sam Altman standing apart without linking hands, and noted that Amodei and Demis Hassabis had called for a slowdown at Davos [215-218]. He argued that massive funding streams could be leveraged as a lever for safety if public pressure is sufficient, but observed that “investors don’t play much of a role anymore because the leading AI companies now are kind of above the level where private investors can influence them” as they head toward IPOs [221-227][232-235]. Tallinn reiterated the need to “slow down” and suggested that greater transparency about what AI leaders know would help create the political pressure required for a slowdown [256-257].
When asked to prioritise actions for the next 12-24 months, Minister Teo said the “AI safety research priorities need to be refreshed” because the field moves quickly, and that “we need to introduce better testing tools” to give developers practical assurance [240-249]. Cormann added that there is “no one thing that will make us all safe” and called for a “comprehensive” effort that catches up with innovation while deepening coordination [251-254]. Deo stressed the need to “institutionalise” AI-safety governance structures so they can keep pace with rapid technological change [253-255]. The panel collectively agreed that coordinated transparency, incident-reporting frameworks and the development of open-source safety tools are immediate priorities, while recognising that enforcement mechanisms and sustained institutional capacity remain open challenges [91-99][162-166][236-239].
Nicolas Miailhe closed by reaffirming that “the coordination gap in frontier AI safety is real, and it is urgent” yet “closable” [262-264]. He invited participants to the forthcoming UN General Assembly session in New York, where the fourth edition of AI Safety Connect will be hosted, hoping to continue the collective effort [265]. Osama Manzar concluded with a broader moral framing, urging that “the entire safety aspect of AI should be more from ‘please save people from AI’… we have to save human intelligence from artificial intelligence” and calling for strong safety guards and policy playbooks to be built into AI systems [266-276].
Overall, the discussion revealed strong consensus that AI risks are global and demand coordinated governance, inclusive evidence-based consensus-building, and robust capacity-building. Middle-power and regional actors were identified as pivotal levers for shaping standards, while concrete infrastructure proposals (transparent incident reporting, an international response centre, and open-source safety-tool catalogues) were widely endorsed. Points of contention included the extent to which private investors can influence safety incentives, whether voluntary reporting or mandatory enforcement should dominate, and the preferred mechanism for slowing development (periodic pauses versus a provisional prohibition). These disagreements underscore the complexity of aligning diverse stakeholder interests into a coherent global AI-safety strategy.
that the race towards artificial intelligence is no longer a theoretical pursuit. As billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence, the technology is now advancing rapidly. And safety is not keeping pace with it. There are wonderful opportunities on the other side of this quest. There are also big risks. And so that’s the purpose, that’s the reason AI Safety Connect was founded. AI Safety Connect is there to help shape the frontier AI safety and secure agenda towards what I would frame as commonsensical AI risk management. AI Safety Connect has been founded to encourage global majority engagement into frontier AI safety. And AI Safety Connect has been created to showcase concrete governance coordination mechanisms, tools, and solutions.
So how do we do this? We convene at each AI summit. So last year we started in Paris, this year in India, next year we’re going to be in Switzerland. But we also convene at the UN General Assembly, right? We need a faster tempo for these safety discussions, so every six months we have this global convening. We also do capacity building, and we also do trust-building exercises, at times behind closed doors. Well, this week in New Delhi has been an intense one, an impactful one. On Tuesday we had a full day of panels, conference, solution demonstrations, and closed-door workshop discussions on some specific nuts to crack to advance AI safety. We, for example, had the privilege of hosting Prime Minister Dick Schoof from the Netherlands on stage to deliver a special address on the role of top leadership in advancing AI safety.
We also engage with industry and with academia, in India and abroad. So it’s been an extremely busy week. Beside our main event, we had the closed-door discussion that I was mentioning, and yesterday and today, these closed-door scientific dialogues; we’re going to publish the results soon. They brought together senior industry leaders to discuss shared responsibility for AI safety. Well, obviously, none of this would happen without partnership. And we want to thank our co-hosts, the International Association for Safe and Ethical AI and its director, Professor Stuart Russell, to whom I will hand over the floor in a few minutes, and the Digital Empowerment Foundation, which is anchoring us at the grassroots here with Osama Manzar, who will close the session later on.
And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moderate the panel, and we’re thankful for that. The Future of Life Institute, Ima and Jaan, who have been supporting this effort, and the Minderoo Foundation, whose team is here as well. It’s great to have your support and we are thankful for that. So today we’re about to hear from His Excellency Mathias Cormann, who is the Secretary-General of the OECD. We’re going to hear from Her Excellency Minister Josephine Teo, who is the Minister for Digital Development and Information of the Government of Singapore. Thank you for your continuous support, really appreciate that. Same for Jaan Tallinn, who is an AI investor but also a founding engineer at Skype and the co-founder of the Future of Life Institute. And last but not least, we also have Minister Gobind Singh Deo, who is going to be with us from Malaysia. Thank you, Minister, as well as Vice President Kim for Digital and AI at the World Bank. So an extremely important conversation to have. And before we welcome you to the stage, I would like to hand over the floor to Professor Stuart Russell to say a few words, and to speak also about what’s happening next week in Paris. Thank you so much.
Thank you very much, Cyrus and Nico. So as Nico mentioned, the International Association for Safe and Ethical AI, or IASEAI, the world’s worst acronym, is a global, democratic, scientific and professional society. We have several thousand members and approaching 200 affiliate organizations. Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity. And as Nico mentioned, our second annual conference will take place in Paris starting on Tuesday. It’s still, I think, possible to register, but we’re already up over 1,300 people coming. It’s at UNESCO headquarters in Paris. Thank you. So achieving this mission of ensuring that AI systems operate safely and ethically is partly a technical challenge. How do we even build systems that have that property?
But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this panel is mainly about this second challenge. And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders. And we must coordinate to make sure that they don’t happen or they don’t originate anywhere. And it’s, I think, fitting that we are having this summit here in India, which has really, among other things, championed the idea that everyone on Earth should have a say. And so with that, I will hand over to Eileen. Thank you very much.
Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the former U.S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UN Human Rights Council. Eileen? Let me welcome the speakers to the floor. Please, Your Excellencies, Mr. Mathias Cormann, Mr. Gobind Singh Deo, Ms. Josephine Teo, and Mr. Jaan Tallinn, as well as Mr. Sangbu Kim, join us on stage.
Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get right to our speakers. So we’re here to share views on the opportunity for policymakers to impact international AI governance. As the race towards AGI and superintelligence intensifies, AI safety advocates face a compounding challenge. The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators. The result is an unharmonized governance landscape that fails to shape the behavioral incentives of those building and funding frontier AI. Economies, governments, and societies do not respond well to such mixed signals.
While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper international diplomacy on the most… extreme risks. At this juncture, middle powers and global majority states can’t be seen as peripheral actors in this landscape. Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safeties. Leading from the middle may turn out to be a more powerful approach than previously anticipated. Whether or not that collective power is exercised now will determine whether international AI governance moves from the rhetorical level to real-world impact on safety. This panel will aim to identify present-day coordination gaps in the global AI practice and the global market.
We will also look at the role of global AI in international AI safety and highlight practical steps policymakers can take in the coming months to close them. So to our panel, I’ll start with Secretary-General Cormann. The OECD has done remarkable work over the past decade, developing consensus on the OECD principles, providing a definition of AI systems that has resonated internationally, and playing an international role in operationalizing the Hiroshima International Code of Conduct. Along with those foundations, we now have the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. With these principles, definitions, and frameworks in mind, a two-part question for you. First, what are the key lessons learned from the process of building consensus and then implementing these frameworks?
And then second, looking ahead: what’s the most critical piece of coordinated frontier AI safety infrastructure we should be building now? Some have called for an international incident response center, and we’re all curious whether you think that should be a priority and is achievable. Just some small, easy questions.
In terms of what is the key to success, what is the most important lesson looking back on what we need: trust is built through inclusion and on the basis of objective evidence. And, you know, I think what we’ve learned over the last few years is that bringing together all the relevant actors, governments, companies, civil society, technical experts, is what we need to do. I mean, each has a different perspective and different imperatives. I mean, markets reward the private sector for speed, scale, and innovation, while governments must manage risk and protect the public interest without stifling progress. But a challenge, and it’s been mentioned in some of the opening remarks, a challenge for policymakers in this context is that AI is moving much faster than policy cycles have traditionally moved, which easily then creates gaps between innovation, progress and opportunity on the one hand, and necessary oversight, mitigation and management of risk on the other.
But all sides in this conversation do share an essential common interest, and that is to ensure that the systems that are developing are trustworthy, because without public trust, in the end, even the most powerful AI tools will struggle to gain broad adoption. So that means that occasionally, and it’s not always popular with everyone, but occasionally we should slow down. Occasionally we should actually pause. Pause, test, monitor, audit, share information, and take the time and invest in building confidence that these systems can work as intended and respect fundamental rights. So that’s sort of, I guess, the first point. Another critical lesson involves international consistency, and this is part of the reason why these sorts of summits are so important: to really facilitate these conversations among countries and among different jurisdictions, because national priorities can vary quite widely, and there are of course fragmentation and compliance-cost-related risks. At the OECD, really what we’ve been doing for six decades now, across different policy areas, is to try and reduce fragmentation by achieving alignment around key principles, building shared evidence and facilitating the necessary conversations to develop a more coherent, better coordinated approach moving forward. And on AI, I mean, we’ve developed the OECD principles, which were first adopted in 2019 and which are now adhered to by 50 countries around the world, and that was really the first globally recognized baseline for trustworthy AI. The OECD’s lifecycle definition of an AI
system has since shaped policy frameworks from the EU AI Act to U.S. executive orders. And we’ve had, just earlier, the meeting of the Global Partnership on AI, co-chaired by Korea and Singapore. We’ve got the OECD AI Policy Observatory, which essentially covers the broad gamut of all of the different policy approaches around the world, to provide countries and industries with data and evidence on what’s being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it, but really looking at the facts. Now, looking ahead, and you sort of ask a question here about what to do about the risk: the most critical piece of frontier AI
safety infrastructure is coordinated transparency and incident reporting. I mean, the Hiroshima AI Process Code of Conduct and its reporting framework, launched at the AI Action Summit in Paris last year. You know, that’s a promising step, and we’ve got to continue to develop that. Since their publication, 25 organizations across nine countries have already submitted detailed reports on how they manage AI risks, offering for the first time a comparable view of developer practices across jurisdictions. The next stage is to strengthen information sharing on AI failures and near misses. The GPAI Common Framework for Incident Reporting aims to help us collectively learn from mistakes before they scale globally, and over time, this could evolve into an international AI
Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith. Finally, we do need to scale access to practical safety tools. With global partners, the OECD recently launched an open call for open-source safety and evaluation tools, hosted in the OECD.ai catalogue of tools and metrics, to make trustworthy AI easier to implement in practice. I mean, these are some initiatives to form the foundation of a more transparent, data-driven, and interoperable AI governance ecosystem.
Excellent. Minister Teo, a number of questions for you, but let me start with the fact that Singapore occupies a very distinctive position in the global geostrategic landscape as a pro-innovation, advanced knowledge economy, with deep commercial and diplomatic ties to both the U.S. and China. As the race to AGI intensifies and bilateral tensions mount, is there a role for Singapore and other middle powers to play in bridging the coordination gap to keep scientific and safety channels open? And also, what’s the most important step middle powers can take in the next 12 months to help establish a shared minimum understanding of frontier safety?
Well, thank you very much for that question. I think there is no running away from the fact that for smaller states, and that includes Singapore, the technologies that our companies and our citizens are going to rely on do not originate from our shores. So they don’t necessarily come within our jurisdictions. We don’t always get to set the rules. Having said that, I do believe that we’re not without agency. It doesn’t mean that we take a step back and just let things happen to us. There are still things that we can do. One of the most important things, I think, as policymakers is for us to think about what it takes to translate what we know from science into policy.
And I wanted to just say why this is so important. In our case, as policymakers, the key questions will always be: are the policies that we make effective? And also, policies always come with trade-offs. With the question of effectiveness, there is always a need to understand what actually works, as opposed to what looks good on paper. With the question of trade-offs, it’s about understanding what we lose as a result of whatever safety aspects it is that we choose to put in place. And whether we can minimize them, can we mitigate them? Now, in areas where safety is the objective, we can’t just go with gut. We can’t just go with speculation. Take, for example: in my previous life, I was working on promoting Singapore’s air hub.
And we had to deal with the question of aviation safety. We were expanding our airport. It was going to carry many more passengers in and out of the country. But we are limited by the number of runways. And in land-scarce Singapore, you can’t just click your fingers and say, let’s build a new one. It’s a long runway. It’s very expensive anyway. Then there is the question of what do you do when you have these jumbo jets like A380s. Because each time an A380 hits the runway, it creates so much of a blast that you really need to create more distance between the A380 taking off and the next aircraft that is scheduled to take off.
Now, this is not a question that the transport minister can just decide on a whim. The air traffic management has to decide on its policy of how much distance is considered safe between landings, or rather between takeoffs. And to answer this question, you really need to invest in the research. You need to invest in understanding the tests. So the science is one part of it. But between science and policy, you are actually going to need a lot of time. You are going to need a lot of tests. You are going to need a lot of simulations. You need to understand whether the distances that you decide are safe work well in a thunderstorm, a tropical thunderstorm.
Does it work just as well in a snowstorm? Well, we don’t have snow in Singapore. But think about the airline that operates this: if each country that they fly into has a different safety distance, that creates some difficulty. So we therefore think that not only is there a need to invest in understanding the science, and a need to understand what testing, what good testing, looks like; there is also a need for us to think about what standards that will eventually be interoperable look like. Which is why we think that international efforts, the collaboration that is being carried forward by the OECD through the Global Partnership on AI, the AI Safety Connect effort, and also IASEAI.
Where is Stuart now? Those kinds of efforts, you can’t do without. At the outset, there is likely to be a bit of fragmentation. And the trade-off of not having these conversations is that we are not even going to make advances in AI safety. And I don’t think that that’s a very good place for us to be in. It doesn’t give us the assurance that we can deliver to our citizens. And it does not create a foundation of trust that will eventually help us to push ahead with the use of this technology on a wider scale. So that’s how we are thinking about it, Eileen. Thank you.
So let me turn to Minister Gobind from Malaysia, and note that under your leadership and Malaysia’s 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN’s agenda by establishing the ASEAN AI Safety Network. Malaysia is now finalizing its own AI National Action Plan, and Malaysia’s AI Governance Bill is expected in Parliament in 2026. So this dual-track approach of building national capacity while leading regional coordination represents a model of middle-power agency that other countries are watching closely. So what lessons do you think other middle powers can draw from Malaysia’s experience? And on the ASEAN AI Safety Network, we have to note that operationalizing it will require sustained political will, technical capacity and resources.
So what concrete steps must ASEAN take in the next 12 to 18 months to ensure that this isn’t just aspirational?
Online fraud, for example, scams; you have deepfakes today; you have huge concerns about certain vulnerable groups that are going to be impacted, children, older folk and so on and so forth. So this is something that stretches across the region. How do we deal with it in a coordinated way and ensure that the conversation doesn’t just stop with the government of the day, but is a conversation that expands over a period of time, with clear policies that we can actually execute? The second layer that I think we need to think about is in the event there’s a need for execution. When we speak about risks in AI and we speak about how we’re going to govern these risks, we often talk about standards.
We often talk about regulation. We even speak about legislation at times, for areas that pose higher risks. But ultimately, it really comes back down to making sure you have an agency that can enforce it, because you can have the best standards, regulations and legislation, but if there is no institution that’s really able to implement those standards, to ensure that they are properly implemented, and also to ensure that rules for failure to implement are enforced, then those standards, regulations and policies are really going to be just strong on paper, but they’re not going to have that impact that you need. So again, how do you build this mechanism across ASEAN, where every country strengthens itself domestically first and then moves across to the ASEAN member states, and hopes to learn from their experiences, so that we can together move ahead in this new world of AI and, I think, the threats that we anticipate in future.
Now the third part, which is really important, is also ensuring that whilst this goes on, you create those policies, you have institutions that enforce, and the discussions persist at an ASEAN level. I think what is important is also to have that expertise looking at what comes next. We must make sure that our countries are prepared for the risks that are to come with the next generation of technology. This is important because you don’t want a situation where new technology is adopted, and there are risks that come with this new technology, and you’re not prepared. I think that’s something we want to avoid, and that’s the reason why I come back to where I started off. We really need to look at building institutions that have the expertise and, of course, are able to sustain themselves as we go along, and to build and deliver something that’s impactful.
Sorry, but that’s in short what we’re doing in Malaysia today.
Excellent. Thank you so much. Okay. Let me turn to Vice President Kim and talk about the World Bank, which has been at the forefront of digital public infrastructure, helping countries leapfrog legacy systems. We note that frontier AI systems, though, are arriving in the Global South under very different conditions from previous waves of technology, and governments are under pressure to deploy AI systems quickly, often using models that haven’t been adequately tested, let alone certified for their context, languages, or risk tolerances. So how can the World Bank help Global South countries move from being passive recipients of frontier AI to active shapers of safety and reliability requirements before the systems are deployed at scale?
Thank you. In one word, definitely we need to make our clients well prepared from scratch. When they design AI systems, they need to design the safety architecture within the system. In general, that’s very correct. But the real challenge is that nobody can really anticipate a new type of threat; especially for some countries with low capacity, it is really hard to figure out what that will be. So in order to tackle that type of irony and dilemma, we need to work very closely with developed economies, companies and governments, and with very high-end examples, so that we can really connect those good examples to the developing world. Partnership is one of the good examples. We are helping our countries; for example, some big tech companies run red teams, trying very hard to attack their own systems in advance by fully utilizing AI.
So through that type of practice and experiment, they can learn how to prevent an AI attack in the future, which is pretty much possible. So in this way, it is inevitable for our developing countries to keep track of the new trends and new innovations, even in this safety protection area. It is the only way. So I have to admit this constraint. But think about this. There is an anecdotal story in East Asia, in China and in Korea: there are two vendors. A merchant is selling two products. Number one is a spear. And he keeps saying that this spear is so strong that it can get through any kind of shield. So this is one vendor. The other vendor is selling a shield.
And they are saying that this shield is one of the safest and strongest shields; no spear can get through this shield. This is exactly the ironical situation. If you think about AI, an AI attack is the spear. AI is so strong and smart and really capable, so it can get through and hack any system with high-end intelligence and knowledge. But the good news is that, on the other hand, we also can build strong protective systems by fully utilizing AI. So this is good news, but the constraint is that we do not clearly know how AI can really evolve to fully protect against those big attacks in the future. So in order to solve this type of ironical situation, from the developing world point of view and from the World Bank point of view, the only way is to work very closely, collaborate, and learn from the advanced technology and the advanced companies and countries.
Thank you so much. Last but not least, Mr. Jaan Tallinn, you occupy a very rare position in this landscape as a founding engineer of Skype, an early investor in DeepMind and Anthropic, and you’re also the co-founder of the Future of Life Institute, which last October released a statement on superintelligence calling for a prohibition on superintelligence development until two conditions are met: number one, broad scientific consensus that it can be done safely and controllably; and second, strong public buy-in. Let’s just ask the hard question. What would an effective prohibition look like in practice? How could that work?
Thank you very much. So I think I’m kind of a little bit different from the people on this panel, in that my main worries about the future are less about how AI is being deployed and diffused and taken into practice. I’m way more worried about what is happening in the labs, in the top AI companies. I’m not sure what the future is going to look like, because they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence. And, I mean, we just saw yesterday the photo where Narendra Modi, Dario Amodei, and Sam Altman refused to link hands.
I mean, this is, like, indicative. We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can’t do it alone. And I think there are two reasons why it’s an unfortunate situation. One is that the U.S. as a country is conflicted. They basically rely on AI for their economic and competitive power. So they are very hesitant to meddle with the now cutthroat situation in AI companies, and the rest of the world really doesn’t understand how big a danger they are in now. So part of the reason why we did the superintelligence statement is to create awareness that there is increasing political demand to do something about this situation.
We now have more than 130,000 signatures, which is many times more than our original six-month pause letter had in 2023. So yeah, if there was enough pressure, I think clearly the rest of the world is still more powerful than the leading AI countries. There are more people, there’s more economic power, etc. So if there was enough pressure, this could be solved. The way I put it is that it’s super hard to do a $10 billion project; it’s impossible to do it if it’s illegal. So having these trillions flow into AI actually makes it easier to govern, not harder.
So I’m tempted to follow up with a question about investors and their potential role in this. They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. So what would it take to bring investors meaningfully into the safety conversation?
So, yeah, I think the answer is kind of simple. I don’t think investors play much of a role anymore, because the leading AI companies now are kind of above the level where private investors can influence them. They will IPO soon. And if you are in an IPO market, there is a level playing field, which means that if somebody’s not funding, somebody else will. So I don’t think investors… investors could have affected things, but that was five, ten years ago.
Great. Okay, so since we’re running short on time, I’m going to ask one question and ask you all to answer it, very briefly each, which is about the 12-month window. Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months, before frontier AI capabilities advance beyond our ability to evaluate and govern them. So what would you recommend be prioritized between now and, basically, the next year to two years, by each of you, to enhance safety and security?
I think there are two, really. I think the AI safety research priorities need to be refreshed, because the field has moved so quickly. The Singapore Consensus identified a set, but as soon as they were published, we recognized that they would be out of date. So we need to refresh it. That’s why we’re going to have the second edition worked on, hopefully in a few months. The second thing, I think, is that we can’t just keep thinking about frameworks, you know, and guidelines. At some point, we need to be able to introduce better testing tools. And until we are able to do so, the companies that are developing and deploying AI models also don’t have a very practical way of giving assurance.
So I’d like to see, in the next 12 months, some further advancements in those two areas.
I’ll be really quick. I know there’s always a temptation in these sorts of conversations to ask: what is the one thing that can sort of fix it all? And the truth is, there’s not one thing. We’ve got to go as fast as we can to play catch-up to a degree, but we’ve also got to go as comprehensive and as deep as we can. There’s just no alternative. There’s catch-up to be played, we’ve got to put in a real effort, and it’s got to be right across the board. And I don’t think that you can just say there’s the one thing that will make us all safe and it’s going to be okay.
Minister Gobind?
I think, as I said earlier, we need to start thinking about how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance. In this regard, we have to understand that things are going to move very quickly, and you’re going to see new technology develop very fast, which brings new risks as well. So in that regard, you’ve got to build something that’s sustainable, and I think, in order to do that, institutionalizing it should be a priority.
Everyone is really rushing for AI system development, AI solution development. That means AI safety measures are currently underinvested. So I really would like to urge all of us to think about this: it is not free. We need to spend some money to protect the system in advance, from scratch, when we design the system. So that means we should allocate some money to fully invest in…
Jaan Tallinn?
So, slow down. We really need to slow down. The companies are asking for it. And instrumental to that would be, basically, transparency: more people should know what the leaders of AI companies know, in order to understand how crucial the slowdown now is.
Okay, great. Well, I believe we have a little bit of a close coming, and thank you all so much. I wish we had had a day to talk about all of these issues. But thank you so much. Thank you very much.
Thank you very much, Eileen, and this fantastic panel, excellencies, colleagues, friends. What we’ve heard today confirms something important. The coordination gap in frontier AI safety is real, and it is urgent. And as we’ve discussed today, it is closable. And before I hand over the floor to Osama Manzar to close off with a few minutes of remarks and reflection, I’d like to invite you all to the next edition, at the United Nations General Assembly in New York, where we hope to organize the fourth edition of AI Safety Connect, hopefully with many of the great policymakers and leaders we have heard from today, to carry forward that collective effort. Osama, the floor is yours.
Well, thank you very much. And we are one of those absentee co-organizers in this one, you know, being a local. But apart from thanking each one of you who didn’t get up and, you know, go out of the room, and every one of you who gave all the safety remarks before usage of AI, on behalf of the 40 million people that we have reached out to in the last 23 years, and the billions of other people whom we are going to work for, I want to suggest that the entire safety aspect of AI should be more from “please save people from AI”. Right? Because that’s the safety, like with a car on the road.
You know, we have to save people before you teach people how to think. So we also have to keep a very, very strong thing: how do we save human intelligence from artificial intelligence? And how do we build in the safety guards and all the ethics and all the, you know, policy playbooks? Thank you very much.
“The race towards artificial intelligence is no longer a theoretical pursuit; billions and maybe trillions of dollars are being deployed to push the AI frontier, and safety is not keeping pace with it.”
The knowledge base states that the race toward AI is no longer theoretical, that billions to trillions of dollars are being invested, and that safety is lagging behind the rapid technological advance [S1].
“The coordination gap in frontier AI safety is real, urgent, and can be closed.”
A stakeholder’s opening remarks explicitly note that the coordination gap in AI safety is real and urgent, echoing the panel’s assessment [S11].
“Artificial intelligence is advancing at a rapid pace.”
An open-forum primer describes AI as advancing rapidly, providing broader context for the claim about fast technological progress [S108].
“Technological development in AI is not without risk.”
Discussion notes highlight that AI development carries risk, adding nuance to the safety concerns raised in the report [S96].
There is strong consensus that AI safety is a global challenge requiring coordinated governance, inclusive evidence‑based consensus building, and robust capacity‑building. Middle powers and regional bodies are seen as pivotal actors, and concrete infrastructure such as incident‑reporting mechanisms and open‑source safety tools are widely endorsed. Participants also agree on the need for periodic slow‑downs, testing and investment in safety‑by‑design.
High consensus on the need for global coordination, inclusive governance, capacity building and investment; moderate consensus on specific mechanisms (incident response centre) and on the role of funding as a lever. This broad agreement provides a solid foundation for advancing coordinated policy initiatives and allocating resources toward practical safety tools and regional cooperation.
The panel largely concurs on the necessity of coordinated AI governance, but diverges on the mechanisms to achieve safety—ranging from voluntary transparency and incident reporting, to enforced institutional compliance, to periodic pauses, to outright prohibitions. A notable unexpected split concerns the perceived role of private investors, with one speaker viewing them as a potential lever and another dismissing their influence. These disagreements highlight the challenge of aligning diverse stakeholder perspectives into a coherent global safety strategy.
Moderate to high. While there is broad consensus on the goal of AI safety, the lack of agreement on concrete levers—investor engagement, enforcement versus voluntary reporting, and the preferred slowdown mechanism—suggests that achieving unified policy action will require substantial negotiation and compromise.
The discussion was shaped by a handful of pivotal insights that moved it from a generic acknowledgment of AI risks to a nuanced exploration of governance levers. Stuart Russell’s framing of the dual technical-governance challenge set the agenda, while Eileen Donahoe’s spotlight on middle-power agency broadened the geopolitical lens. Mathias Cormann’s call for trust-building pauses and incident reporting introduced concrete procedural tools, later reinforced by Gobind Singh Deo’s insistence on enforcement capacity. Josephine Teo’s aviation analogy and Sangbu Kim’s spear-and-shield metaphor grounded abstract concepts in real-world analogies, prompting concrete discussions about standards, testing, and collaborative safety-tool development. Jaan Tallinn’s stark warning about lab-level risks and the feasibility of a prohibition injected urgency and highlighted the limits of market-based solutions, leading to a brief debate on investor influence. Collectively, these comments redirected the conversation toward actionable, inclusive, and internationally coordinated governance mechanisms, culminating in a consensus that the coordination gap is real but bridgeable through inclusive institutions, transparent reporting, and sustained political pressure.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.