Policymaker’s Guide to International AI Safety Coordination
20 Feb 2026 17:00h - 18:00h
Summary
The AI Safety Connect summit in New Delhi was convened to confront the accelerating race toward artificial intelligence and the widening gap between rapid technology deployment and adequate safety measures [1-5][6-9][11-15]. Organizers highlighted that AI Safety Connect brings together policymakers, industry, and academia through semi-annual global convenings at AI summits and the UN General Assembly to foster faster, inclusive safety discussions [11-22][24-30]. Stuart Russell of the International Association for Safe and Ethical AI emphasized that ensuring AI systems operate safely is both a technical and governance challenge that requires coordinated international action because harms cross borders [39-46], noting the upcoming second annual conference in Paris as an opportunity for stakeholders to collaborate on these governance issues [36-38].
Eileen Donahoe framed the policy discussion by warning that current AI governance is fragmented, with minimal guardrails and insufficiently binding risk-management processes, creating mixed signals for developers and investors [56-60]. She argued that middle-power and global-majority states can leverage pooled resources, market influence, and regulatory innovation to shape AI safety, and that their active participation is crucial to move from rhetoric to real-world impact [62-66]. Mathias Cormann stressed that trust in AI is built through inclusive, evidence-based processes and that international consistency, exemplified by the OECD’s 2019 AI Principles now adhered to by 50 countries, helps reduce fragmentation and align policy frameworks [77-80][86-88]. He identified coordinated transparency and incident reporting, such as the Hiroshima AI Process Code of Conduct and the emerging Global Partnership on AI (GPAI) incident-reporting framework, as the most critical frontier-AI safety infrastructure to develop now [91-96], and highlighted the OECD’s open-source safety-tool catalogue as a means to make trustworthy AI more practicable for developers worldwide [98-99].
Singapore’s Minister Josephine Teo explained that smaller states must translate scientific insights into effective policies, invest in rigorous testing and simulations, and cooperate internationally to create interoperable safety standards [103-110][115-122][132-138][141-146]. Malaysia’s Gobind Singh Deo added that without dedicated agencies to enforce standards, regulations remain paper-only, and that ASEAN needs institutional mechanisms to sustain AI governance across the region [162-166][168-173]. Sangbu Kim of the World Bank argued that developing countries require early-stage capacity building and partnerships with advanced economies to embed safety architecture into AI systems before large-scale deployment [176-182][184-190].
Jaan Tallinn warned that the most pressing risk lies in unchecked laboratory races toward superintelligence, calling for a slowdown and greater transparency, and noting that private investors now have limited influence over leading AI firms [210-218][221-226][231-236]. The closing remarks reiterated that the coordination gap in frontier AI safety is real and urgent but can be closed through continued global collaboration, with a call to convene again at the UN General Assembly and the next AI Safety Connect edition [260-264].
Key points
Major discussion points
– Urgent need for global coordination on AI safety, especially involving middle-power and “global-majority” states.
Eileen Donahoe frames the problem as fragmented, non-binding risk management across jurisdictions and stresses the role of middle powers [55-63]. Stuart Russell stresses that AI harms cross borders and require coordinated governance [44-46]. Mathias Cormann adds that trust is built through inclusive, evidence-based dialogue among governments, industry and civil society [77-84].
– Proposed concrete governance infrastructure: transparent incident reporting, an international incident-response centre and open-source safety tools.
Cormann describes the “coordinated transparency and incident reporting” framework, the Hiroshima Code of Conduct reporting system, and the emerging Global Partnership on AI (GPAI) common framework for incident reporting that could evolve into an international response centre [91-98]. Earlier, the panel raised the idea of an international incident-response centre as a priority [75-76].
– National and regional initiatives as models for collective action.
Singapore’s Minister Josephine Teo explains the need to translate scientific knowledge into policy, invest in testing, and develop interoperable standards, while highlighting OECD-led efforts such as the Global Partnership on AI [103-146]. Malaysia’s Minister Gobind Singh Deo outlines the ASEAN AI Safety Network, the importance of agencies capable of enforcement, and the need for sustained regional institutions [152-173].
– Industry dynamics, superintelligence risk, and the limited influence of investors.
Jaan Tallinn warns that the primary danger lies in the “cut-throat race” within leading AI labs, calls for a slowdown, and notes the difficulty of influencing these firms through investors, who are now largely sidelined [207-226][231-236].
– Immediate 12-month priorities: refresh research agendas, develop practical testing tools, and institutionalise AI-safety governance.
Eileen Donahoe asks panelists to identify actions for the next year [236-239]. Josephine Teo stresses updating the Singapore consensus and advancing testing tools [240-249]. Gobind Singh Deo stresses building sustainable institutions that institutionalise AI-safety governance [253]. Cormann underscores the need for a comprehensive, rapid catch-up across all fronts [251].
Overall purpose / goal
The discussion was convened to surface and narrow the “coordination gap” in frontier AI safety, to showcase existing international frameworks (OECD principles, Singapore consensus, AI Safety Connect), and to generate concrete, near-term actions that policymakers, middle-power states, and multilateral bodies can take to shape a trustworthy, globally coordinated AI governance regime.
Overall tone
The tone begins with a sense of urgency and alarm about rapid AI progress outpacing safety measures [1-4][57-60]. It quickly shifts to a collaborative, solution-oriented mood as participants share existing initiatives and propose concrete infrastructure [77-98][103-146]. Mid-conversation, a more cautionary and even confrontational tone emerges when discussing the “cut-throat race” in labs and the need for a slowdown [210-218][207-226]. The closing remarks return to an optimistic yet urgent call for coordinated action and institutionalisation [260-264][266-277].
Speakers
– Osama Manzar
– Area of expertise / role: Local anchor and co-organiser for AI Safety Connect; works with the Digital Empowerment Foundation on grassroots engagement.
– Title / affiliation: Representative of the Digital Empowerment Foundation (co-organiser)[S2]
– Jaan Tallinn
– Area of expertise: AI safety advocacy, AI governance, philanthropy, and technology investment.
– Title / affiliation: Founding engineer of Skype; early investor in DeepMind and Anthropic; co-founder of the Future of Life Institute[S4]
– Stuart Russell
– Area of expertise: Artificial intelligence research, AI safety, and ethics.
– Title / affiliation: Professor of Computer Science, University of California, Berkeley; Director of the International Association for Safe and Ethical AI (IASEAI)[S5]
– Gobind Singh Deo
– Area of expertise: AI policy, regulatory frameworks, and regional coordination.
– Title / affiliation: Minister of Digital, Government of Malaysia; championed the AI agenda under Malaysia’s 2025 ASEAN chairmanship[S8]
– Mathias Cormann
– Area of expertise: International policy coordination, AI governance, and standards development.
– Title / affiliation: Secretary-General of the Organisation for Economic Co-operation and Development (OECD)[S11]
– Josephine Teo
– Area of expertise: Digital development, AI policy, and regulatory implementation.
– Title / affiliation: Minister for Digital Development and Information, Government of Singapore[S13]
– Nicolas Miailhe
– Area of expertise: AI safety strategy, convening global AI safety stakeholders, and capacity building.
– Title / affiliation: Founder and lead of AI Safety Connect, organiser of the summit
– Sangbu Kim
– Area of expertise: Digital infrastructure, AI safety implementation in development contexts.
– Title / affiliation: Vice President for Digital and AI, World Bank[S18]
– Eileen Donahoe
– Area of expertise: AI governance, digital freedom, and human rights advocacy.
– Title / affiliation: Founder and Managing Partner, Sympathico Ventures; former U.S. Special Envoy and Coordinator for Digital Freedom; former U.S. Ambassador to the UN Human Rights Council[S21]
Additional speakers:
– Dick Schoof – Prime Minister of the Netherlands (guest speaker delivering a special address).
The summit opened with Nicolas Miailhe warning that the race toward artificial intelligence has shifted from a theoretical pursuit to a massive, financially-driven endeavour, with billions – possibly trillions – of dollars being poured into frontier AI research while safety measures lag behind [1-4]. He framed AI Safety Connect as a response to this imbalance, describing it as a platform that “helps shape the frontier AI safety and security agenda” and that “encourages global majority engagement in frontier AI safety” [6-9]. The organisation convenes semi-annual global meetings at major AI summits and the UN General Assembly, aiming to accelerate safety discussions, build capacity and conduct closed-door trust-building exercises [10-15]; AI Safety Connect also hosted closed-door scientific dialogues with senior industry leaders, the findings of which will be published soon [15-16]. The New Delhi week featured a full day of panels, solution demonstrations and a closed-door workshop, even hosting Prime Minister Dick Schoof of the Netherlands for a special address on leadership in AI safety [16-22]. Miailhe thanked co-hosts – the International Association for Safe and Ethical AI (IASEAI) directed by Professor Stuart Russell and the Digital Empowerment Foundation – as well as sponsors such as Sympathico Ventures, the Future of Life Institute, Ima and Yann, and the Minderoo Foundation, before introducing the panel of senior policymakers [23-30].
Stuart Russell then introduced the International Association for Safe and Ethical AI (IASEAI), a “global, democratic, scientific and professional society” with several thousand members and approaching 200 affiliate organizations [33-35] (Russell humorously noted that IASEAI is “the world’s worst acronym”). He announced the second annual IASEAI conference in Paris, already attracting over 1,300 registrants [36-38]. Russell distinguished the technical challenge of building safe systems from the governance challenge of ensuring only those systems are built, arguing that the latter “requires coordinated international action because the harms… cross borders” [40-46]. He highlighted India’s role as a champion of universal participation, noting that the summit’s location underscored the need for inclusive global coordination [46-47].
Eileen Donahoe framed the policy discussion by pointing out that AI is being deployed “with minimal guardrails” and that existing risk-management processes are “ill-adapted, fragmented across jurisdictions, or insufficiently binding” [56-60]. She warned that this creates an “unharmonised governance landscape” that sends mixed signals to developers, investors and regulators [58-60]. Donahoe then argued that middle-power and “global-majority” states can leverage pooled resources, market influence and regulatory innovation to move AI safety from rhetoric to real-world impact [62-66]. She posed two questions to the panel: what lessons have been learned from building consensus on AI safety frameworks, and whether an international incident-response centre should be a priority [71-76].
Mathias Cormann responded by stressing that trust in AI is built through “inclusion and on the basis of objective evidence” and that bringing together governments, industry, civil society and technical experts is essential [77-80]. He noted the speed mismatch between AI innovation and policy cycles, which creates gaps that must be bridged by occasional pauses for testing, auditing and information sharing [84-86]. Cormann highlighted the OECD’s 2019 AI Principles, now adhered to by 50 countries, as the first globally recognised baseline for trustworthy AI [88]. He identified “coordinated transparency and incident reporting” as the most critical frontier-AI safety infrastructure, citing the Hiroshima AI Process Code of Conduct reporting framework and the emerging Global Partnership on AI (GPAI) Common Framework for Incident Reporting, which could evolve into an international AI Incident Response Centre [91-96]. He also described the OECD’s launch of an open call for open-source safety and evaluation tools, now hosted in the OECD.AI catalogue [98-99]; the OECD AI Policy Observatory provides data and evidence on policy approaches, facilitating peer learning and taking rhetoric out of the debate [88-90].
Singapore’s Minister Josephine Teo explained that smaller states cannot dictate the rules for AI systems that originate elsewhere, but they can still act by “translating what we know from science into policy” [103-110]. She underscored the need to assess the effectiveness of policies and to understand trade-offs, drawing an analogy with aviation safety, where runway separation distances are set only after extensive research, testing and simulation [119-138]. Teo argued that international collaboration – through the OECD’s Global Partnership on AI, AI Safety Connect and IASEAI – is required to develop interoperable standards and avoid fragmented compliance costs [141-146]. Singapore is preparing a second edition of its AI safety research consensus, expected in the coming months [240-249].
Malaysia’s Minister Gobind Singh Deo described the ASEAN AI Safety Network, established under Malaysia’s 2025 ASEAN chairmanship, as a “dual-track approach” that builds national capacity while leading regional coordination [152-158]. He warned that standards and regulations are ineffective without an agency capable of enforcement, noting that without such institutions “the standards… remain strong on paper but have no impact” [162-166][168-173]. Deo called for the institutionalisation of AI-safety governance across ASEAN, with sustained political will, technical capacity and resources to move beyond aspirational goals [157-173].
Sangbu Kim, Vice-President for Digital and AI at the World Bank, argued that developing-world countries need “early-stage capacity building” and partnerships with high-capacity economies and firms to embed safety architecture from the design stage [176-183]. He described red-team exercises with large tech companies as a way for low-capacity nations to learn how to defend against AI-driven attacks, using the metaphor of AI as a “spear” that can penetrate any shield, but which can also be countered by a strong “shield” built with AI itself [184-190][196-199]. Kim stressed that continuous collaboration is the only way for the Global South to keep pace with emerging threats [191-194][200-202].
Jaan Tallinn, co-founder of the Future of Life Institute, warned that the most serious danger lies in the “cut-throat race” inside leading AI labs to achieve superintelligence [210-214]. He cited recent public calls for a slowdown from AI leaders and argued that political pressure, demonstrated by the 130,000-signature petition, could make a prohibition on superintelligent AI feasible if there is broad scientific consensus and public buy-in [221-226]. Tallinn also asserted that private investors now have little influence over frontier AI firms, which are moving toward IPOs and can replace any missing funding, rendering investor-driven safety interventions largely ineffective [231-236].
In his closing remarks, Nicolas Miailhe reiterated that the “coordination gap” in frontier AI safety is both real and urgent, but “closable” through continued global collaboration [260-263]. He invited participants to the next UN General Assembly session in New York, where the fourth edition of AI Safety Connect will be convened [264-265]. Osama Manzar then broadened the perspective, urging that AI safety be framed as “saving people from AI” and that protecting human intelligence from artificial systems requires strong ethical guardrails and policy playbooks [266-277].
Across the panel, speakers repeatedly emphasized that inclusive, multi-stakeholder coordination is essential for frontier AI safety [10-15][44-46][77-86][56-66]; that transparent incident-reporting and a potential international response centre constitute critical infrastructure [91-96]; that capacity-building partnerships can help low-capacity nations embed safety-by-design (Miailhe, Kim) [15-16][176-183]; and that occasional pauses or a deliberate slowdown are needed to build trust [84-86][210-214]. Disagreements emerged around the role of investors – Donahoe called for their meaningful inclusion [228-230] while Tallinn argued they no longer wield influence [231-236] – and over whether a slowdown should be the primary lever (Tallinn) or whether coordinated transparency and incident-reporting infrastructure should take precedence (Cormann) [208-227][91-96]. A further tension concerned the search for a “silver-bullet” solution; Cormann rejected a single fix in favour of a comprehensive catch-up across technical, regulatory and institutional dimensions [251-254], whereas Tallinn promoted a slowdown as the pivotal measure [208-227].
Key take-aways include: (i) AI safety is lagging behind rapid AI development and requires both technical and governance solutions; (ii) AI Safety Connect provides a semi-annual convening platform to accelerate safety dialogue and capacity-building; (iii) middle-power and global-majority states can shape international AI practices through pooled resources and normative influence; (iv) trust is built through inclusive, evidence-based processes and coordinated transparency; (v) the OECD’s incident-reporting framework and open-source safety-tool catalogue are foundational for a transparent AI governance ecosystem [88-90]; (vi) regional mechanisms such as the ASEAN AI Safety Network illustrate how national capacity-building can be combined with collective coordination; (vii) partnerships with advanced economies and firms enable the Global South to design safety-by-design AI systems; (viii) a deliberate slowdown of frontier AI development, coupled with greater lab transparency, is advocated to manage existential risk; (ix) investors’ influence on AI safety has diminished as leading firms move toward IPOs [231-236]; and (x) dedicated funding is needed to embed safety architecture from the design stage [84-86][254-255].
Concrete actions identified: AI Safety Connect will continue its semi-annual convenings and plan a fourth edition at the UN General Assembly [10-15][264-265]; the OECD is considering expanding coordinated transparency and incident-reporting frameworks, with the possibility of evolving toward an international AI Incident Response Centre [91-96]; the OECD has launched an open call for open-source safety and evaluation tools, now hosted in the OECD.AI catalogue [98-99]; Singapore will publish a refreshed AI safety research priority list and advance testing-tool development within twelve months [240-249]; ASEAN will operationalise its AI Safety Network, building enforcement agencies and sustaining political will over the next 12-18 months [152-173]; the World Bank will deepen collaborations with high-capacity economies and tech firms to provide red-team and safety-by-design support to Global South clients [176-183]; and all participants called for dedicated funding to embed safety architecture from the design stage [84-86][254-255].
Unresolved issues remain, notably the precise legal authority and governance model for an international AI incident-response centre, how to create enforceable interoperable standards without imposing excessive compliance burdens, mechanisms for effectively involving private investors in safety governance, sustainable financing for regional bodies such as the ASEAN AI Safety Network, and the detailed trade-offs between safety measures and innovation speed for smaller states lacking jurisdiction over AI origins [91-96][84-86][228-230][152-173].
Suggested compromises include: implementing targeted, temporary pauses in AI development to allow testing and auditing before scaling; adopting a phased, coordinated incident-reporting system that protects companies from legal or commercial penalties while sharing near-miss data; leveraging middle-power pooled resources to bridge gaps between fast-moving AI labs and slower policy cycles; combining national capacity-building with regional enforcement mechanisms (e.g., ASEAN) to balance sovereignty with collective safety; and conditioning continued investment and market access on greater transparency from leading AI labs, thereby aligning private incentives with safety goals [84-86][91-96][62-66][157-166][207-226].
Overall, the summit underscored that the coordination gap in frontier AI safety is urgent but can be narrowed through inclusive global governance, transparent incident-reporting, capacity-building partnerships, and, where necessary, a calibrated slowdown of development. The convergence of views across policymakers, regional leaders and technical experts provides a solid foundation for concrete policy initiatives in the coming year. Manzar concluded by urging that AI safety be framed as “saving people from AI”, emphasizing the need to protect human intelligence through robust ethical and policy safeguards [266-277].
that the race towards artificial intelligence is no longer a theoretical pursuit. As billions, and maybe trillions now, of dollars are getting deployed to push the frontier of artificial intelligence, the technology is now advancing rapidly. And safety is not keeping pace with it. There are wonderful opportunities on the other side of this quest. There are also big risks. And so that’s the purpose, that’s the reason AI Safety Connect was founded. AI Safety Connect is there to help shape the frontier AI safety and security agenda towards what I would frame as commonsensical AI risk management. AI Safety Connect has been founded to encourage global majority engagement in frontier AI safety. And AI Safety Connect has been created to showcase concrete governance coordination mechanisms, tools, and solutions.
So how do we do this? We convene at each AI summit. So last year we started in Paris, this year in India, next year we’re going to be in Switzerland. But we also convene at the UN General Assembly, right? We need a faster tempo for these safety discussions, so every six months we have this global convening. We also do capacity building, and we also do trust-building exercises, at times behind closed doors. Well, this week in New Delhi has been an intense one, an impactful one. On Tuesday we had a full day of panels, conference, solution demonstrations, and closed-door workshop discussions on some specific nuts to crack to advance AI safety. We, for example, had the privilege of hosting Prime Minister Dick Schoof from the Netherlands on stage to deliver a special address on the role of top leadership in advancing AI safety.
We also engage with industry, engage with academia, of India and abroad. So it was an extremely busy week beside our main event. We had this closed-door discussion that I was mentioning, yesterday and today, these closed-door scientific dialogues. We’re going to publish the results soon; they brought together senior industry leaders to discuss shared responsibility for AI safety. Well, obviously, none of this would happen without partnership. And we want to thank our co-hosts, the International Association for Safe and Ethical AI and its director, Professor Stuart Russell, to whom I will hand over the floor in a few minutes, and the Digital Empowerment Foundation, which is anchoring us at the grassroots here with Osama Manzar, who will close the session later on.
And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moderate that panel and we’re thankful for that. The Future of Life Institute, Ima and Yann, who’s been supporting this effort, and the Minderoo Foundation, whose team is here as well. And it’s great to have your support and we are thankful for that. So today we’re about to hear from His Excellency Mathias Cormann, who’s the Secretary-General of the OECD. We’re going to hear from Her Excellency Minister Josephine Teo, who’s the Minister for Digital Development and Information at the Government of Singapore. Thank you for your continuous support, really appreciate that. Same for Jaan Tallinn, who’s the AI investor but also a founding engineer at Skype and the co-founder of the Future of Life Institute. And last but not least, we also have Minister Gobind Singh Deo, who’s going to be with us from Malaysia, the Minister of Digital. Thank you, Minister, as well as Vice President Kim for Digital and AI at the World Bank. So an extremely important conversation to have. And before we welcome you to the stage, I would like to hand over the floor to Professor Stuart Russell to say a few words and to speak about also what’s happening next week in Paris. Thank you so much.
Thank you very much, Cyrus and Nico. So as Nico mentioned, the International Association for Safe and Ethical AI, or IASEAI, the world’s worst acronym, is a global, democratic, scientific and professional society. We have several thousand members and approaching 200 affiliate organizations. Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity. And as Nico mentioned, our second annual conference will take place in Paris starting on Tuesday. It’s still, I think, possible to register, but we’re already up over 1,300 people coming. It’s at UNESCO headquarters in Paris. Thank you. So achieving this mission of ensuring… that AI systems operate safely and ethically is partly a technical challenge. How do we even build systems that have that property?
But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this panel is mainly about this second challenge. And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders. And we must coordinate to make sure that they don’t happen or they don’t originate anywhere. And it’s, I think, fitting that we are having this summit here in India, which has really, among other things, championed the idea that everyone on Earth should have a say. And so with that, I will hand over to Eileen. Thank you very much.
Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the former U.S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UN Human Rights Council. Eileen? Welcome the speakers on the floor. Please, Your Excellency Mr. Mathias Cormann, Mr. Gobind Singh Deo, Ms. Josephine Teo, and Mr. Jaan Tallinn, as well as Mr. Sangbu Kim, join us on stage.
Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get right to our speakers. So we’re here to share views on the opportunity for policymakers to impact international AI governance. As the race towards AGI and superintelligence intensifies, AI safety advocates face a compounding challenge. The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators. The result is an unharmonized governance landscape that fails to shape the behavioral incentives of those building and funding frontier AI. Economies, governments, and societies do not respond well to such mixed signals.
While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper international diplomacy on the most extreme risks. At this juncture, middle powers and global majority states can’t be seen as peripheral actors in this landscape. Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safety. Leading from the middle may turn out to be a more powerful approach than previously anticipated. Whether or not that collective power is exercised now will determine whether international AI governance moves from the rhetorical level to real-world impact on safety. This panel will aim to identify present-day coordination gaps in global AI practice and the global market.
We will also look at the role of the global majority in international AI safety and highlight practical steps policymakers can take in the coming months to close them. So to our panel, I’ll start with Secretary-General Cormann. The OECD has done remarkable work over the past decade, developing consensus on the OECD principles, providing a definition of AI systems that has resonated internationally, and playing an international role in operationalizing the Hiroshima International Code of Conduct. Along with those foundations, we now have the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. With these principles, definitions, and frameworks in mind, a two-part question for you. First, what are the key lessons learned from the process of building consensus and then implementing these frameworks?
And then second, looking ahead: what’s the most critical piece of coordinated frontier AI safety infrastructure we should be building now? Some have called for an international incident response center, and we’re all curious whether you think that should be a priority and achievable. Just some small, easy questions.
In terms of what is the key to success, what is the most important lesson looking back on what we need: trust is built through inclusion and on the basis of objective evidence. And, you know, I think what we’ve learned over the last few years is that bringing together all the relevant actors, governments, companies, civil society, technical experts, is what we need to do. I mean, each has a different perspective and different imperatives. Markets reward the private sector for speed, scale, and innovation, while governments must manage risk and protect the public interest without stifling progress. But a challenge, and it’s been mentioned in some of the opening remarks, a challenge for policymakers in this context is that AI is moving much faster than policy cycles have traditionally moved, which easily then creates gaps between innovation and progress and opportunity, and necessary oversight, mitigation and management of risk.
But all sides in this conversation do share an essential common interest, and that is to ensure that the systems that are developing are trustworthy, because without public trust, in the end, even the most powerful AI tools will struggle to gain broad adoption. So that means that occasionally, and it’s not always popular with everyone, but occasionally we should slow down. Occasionally we should actually pause. Pause, test, monitor, audit, share information, and take the time and invest in building confidence that these systems can work as intended and respect fundamental rights. So that’s sort of, I guess, the first point. Another critical lesson involves international consistency, and this is part of the reason why these sorts of summits are so important: to really facilitate these conversations among countries and among different jurisdictions, because national priorities can vary quite widely, and there are of course fragmentation and compliance-cost-related risks. And at the OECD, really what we’ve been doing for six decades now across different policy areas is to try and reduce fragmentation by achieving alignment around key principles, building shared evidence and facilitating the necessary conversations to develop a more coherent, better coordinated approach moving forward. And on AI, I mean, we’ve developed the OECD Principles, which were first adopted in 2019 and which are now adhered to by 50 countries around the world, and that was really the first globally recognized baseline for trustworthy AI. The OECD’s lifecycle definition of an AI
system has since shaped policy frameworks from the EU AI Act to U.S. executive orders. And we’ve had just earlier the meeting of the Global Partnership on AI, co-chaired by Korea and Singapore. We’ve got the OECD AI Policy Observatory, which is sort of essentially the broad gamut of all of the different policy approaches around the world, to provide countries and industries with data and evidence on what’s being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it, but really looking at the facts. Now, looking ahead, and you sort of ask a question here about what to do about the risk. I mean, the most critical piece of frontier AI
safety infrastructure is coordinated transparency and incident reporting. I mean, the Hiroshima AI Process Code of Conduct and its reporting framework, launched at the AI Action Summit in Paris last year, you know, that’s a promising step, and we’ve got to continue to develop that. Since their publication, 25 organizations across nine countries have already submitted detailed reports on how they manage AI risks, offering for the first time a comparable view of developer practices across jurisdictions. The next stage is to strengthen information sharing on AI failures and near misses. The GPAI Common Framework for Incident Reporting aims to help us collectively learn from mistakes before they scale globally, and over time, this could evolve into an international AI
Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith. Finally, we do need to scale access to practical safety tools. With global partners, the OECD recently launched an open call for open-source safety and evaluation tools, hosted in the OECD.AI catalog of tools and metrics, to make trustworthy AI easier to implement in practice. I mean, these are some initiatives to form the foundation of a more transparent, data-driven, and interoperable AI governance ecosystem.
Excellent. Minister Teo, a number of questions for you, but let me start with the fact that Singapore occupies a very distinctive position in the global geostrategic landscape as a pro-innovation, advanced knowledge economy, with deep commercial and diplomatic ties to both the U.S. and China. As the race to AGI intensifies and bilateral tensions mount, is there a role for Singapore and other middle powers to play in bridging the coordination gap to keep scientific and safety channels open? And also, what’s the most important step middle powers can take in the next 12 months to help establish a shared minimum understanding of frontier safety?
Well, thank you very much for that question. I think there is no running away from the fact that for smaller states, and that includes Singapore, the technology that our companies, our citizens are going to rely on does not originate from our shores. So they don’t necessarily come within our jurisdictions. We don’t always get to set the rules. Having said that, I do believe that we’re not without agency. It doesn’t mean that we take a step back and just let things happen to us. There are still things that we can do. One of the most important things, I think, as policymakers, is for us to think about what it takes to translate what we know from science into policy.
And I wanted to just say why this is so important. In our case, as policymakers, the key questions will always be, are the policies that we make effective? And also, policies always come with trade -offs. With the question of effectiveness, there is always a need to understand what actually works, as opposed to what looks good on paper. With the question of trade -offs, it’s about understanding what we lose as a result of whatever safety aspects it is that we choose to put in place. And whether we can minimize them, can we mitigate them? Now, in areas where safety is the objective, we can’t just go with gut. We can’t just go with speculation. You take, for example, in my previous life, I was working on promoting Singapore’s Air Hub.
And we had to deal with a question of aviation safety. We were expanding our airport. It was going to carry many more passengers in and out of the country. But we are limited by the number of runways. And in land-scarce Singapore, you can’t just click your fingers and say, let’s build a new one. It’s a long runway. It’s very expensive anyway. Then there is the question of what do you do when you have these jumbo jets like A380s? Because each time an A380 hits the runway, it creates so much of a blast that you really need to create more distance between the A380 taking off and the next aircraft that is scheduled to take off.
Now, this is not a question that the transport minister can just decide on a whim. The air traffic management has to decide on its policy of how much distance is considered safe between landings, or rather between takeoffs. And to answer this question, you really need to invest in the research. You need to invest in understanding the tests. So the science is one part of it. But between science and policy, you are actually going to need a lot of time. You are going to need a lot of tests. You are going to need a lot of simulations. You need to understand whether the distances that you decide are safe work well in a thunderstorm, a tropical thunderstorm.
Does it work just as well in a snowstorm? Well, we don’t have snow in Singapore. But you think about the airline that operates this. If each country that they fly into has a different safety distance, that creates some difficulty. So we therefore think that not only is there a need to invest in understanding the science, not only is there a need to understand what testing looks like, what good testing looks like, there is also a need for us to think about what standards that will eventually be interoperable look like, which is why we think that international efforts, the collaboration that is being carried forward by the OECD through the Global Partnership on AI, the AI Safety Connect effort, and also IASEAI.
Where is Stuart now? Those kinds of efforts, you can’t do without. At the outset, there is likely to be a bit of fragmentation. And the trade-off with not having these conversations is that we are not even going to make advances in AI safety. And I don’t think that that’s a very good place for us to be in. It doesn’t give us the assurance that we can deliver to our citizens. And it does not create a foundation of trust that will eventually help us to push ahead with the use of this technology on a wider scale. So that’s how we are thinking about it, Eileen. Thank you.
So let me turn to Minister Gobind from Malaysia, and note that under your leadership and Malaysia’s 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN’s agenda by establishing the ASEAN AI Safety Network. Malaysia is now finalizing its own AI National Action Plan, and Malaysia’s AI Governance Bill is expected in Parliament in 2026. So this dual-track approach of building national capacity while leading regional coordination represents a model of middle power agency that other countries are watching closely. So what lessons do you think other middle powers can draw from Malaysia’s experience? And on the ASEAN AI Safety Network, we have to note that it has yet to be operationalized, and it will require sustained political will, technical capacity and resources.
So what concrete steps must ASEAN take in the next 12 to 18 months to ensure that this isn’t just aspirational?
Online fraud, for example, scams, you have deepfakes today, you have huge concerns about certain vulnerable groups that are going to be impacted, children, older folk and so on and so forth. So this is something that stretches across the region. How do we deal with it in a coordinated way and ensure that the conversation doesn’t just stop with the government of the day, but it’s a conversation that expands over a period of time with clear policies that we can actually execute. The second layer that I think we need to think about is in the event there’s a need for execution. When we speak about risks in AI and we speak about how we’re going to govern these risks, we often talk about standards.
We often talk about regulation. We even speak about legislation at times, for areas that pose higher risks. But ultimately, it really comes back down to making sure you have an agency that can enforce it, because you can have the best standards, regulations and legislation, but if there is no institution that’s really able to implement those standards, to ensure that they are properly implemented, and also to ensure that rules for failure to implement are enforced, then those standards, regulations and policies are really going to be just strong on paper, but they’re not going to really have that impact that you need. So again, how do you build this mechanism across ASEAN, where every country strengthens themselves domestically first and then moves across to the ASEAN member states and hopes to learn from their experiences, so that we can together move ahead in this new world of AI and, I think, the threats that we anticipate in future.
Now the third part which is really important is also ensuring that whilst this goes on, you create those policies, you have institutions that enforce and the discussions persist at an ASEAN level. I think what is important is also to have that expertise looking at what comes next. We must make sure that our countries are prepared for the risks that are to come with the next generation technology. This is important because you don’t want a situation where new technology is adopted and there are risks that come with this new technology, you’re not prepared. I think that’s something we want to avoid and that’s the reason why I come back to where I started off. We really need to look at building institutions that have the expertise and of course are able to sustain as we go along and to build and deliver something that’s impactful.
Sorry, but that’s in short what we’re doing in Malaysia today.
Excellent. Thank you so much. Okay. Let me turn to Vice President Kim and talk about the World Bank, which has been at the forefront of digital public infrastructure, helping countries leapfrog legacy systems. We note that frontier AI systems, though, are arriving in the global south under very different conditions from previous waves of technology, and governments are under pressure to deploy AI systems quickly, often using models that haven’t been adequately tested, let alone certified for their context, languages, or risk tolerances. So how can the World Bank help Global South countries move from being passive recipients of frontier AI to active shapers of safety and reliability requirements before the systems are deployed at scale?
Thank you. In one word, definitely we need to make our clients well prepared from scratch. When they design the AI systems, definitely they need to design the safety architecture within the system. In general, that’s very correct. But the real challenge is that nobody can really expect a new type of threat; especially for some countries with low capacity, it is really hard to figure out what that will be. So, in order to tackle that type of irony and dilemma, we need to work very closely with very developed economies, companies and governments, and very high-end examples, so that we can really well connect those good examples to the developing world. So partnership is one of the good examples. We are helping our countries; for example, some big tech companies are running red teams, trying very hard to attack their own systems in advance by fully utilizing AI.
So through that type of practice and experiment, they can learn how to prevent the AI attack in the future, which is pretty much possible. So in this way, it is inevitable for our developing countries to keep track of the new trends and new innovation, even in this safety protection area. It is the only way. So I have to admit this constraint. But think about this, some anecdotal story in East Asia, in China and in Korea: there are two merchants selling two products. Number one is a spear. And then they keep saying that this spear is so strong that it can get through any kind of shield. So this is one vendor. The other vendor is selling a shield.
And then they are saying that this shield is one of the most safe and strong shields; no spear can get through this shield. This is exactly an ironical situation. If you think about AI, the AI attack is the spear. AI is so strong and smart and really capable, so it can get through and hack any system with high-end intelligence and knowledge. But the good news is that, on the other hand, we also can build strong protective systems by fully utilizing AI. So this is one piece of good news, but the constraint is that we do not clearly know how AI can really evolve to fully protect against those big attacks in the future. So in order to solve this type of ironical situation, from the developing world point of view and from the World Bank point of view, the only way is to very closely work and collaborate and learn from the advanced technology and advanced companies and advanced countries.
Thank you so much. Last but not least, Mr. Jaan Tallinn, you occupy a very rare position in this landscape as a founding engineer of Skype, an early investor in DeepMind and Anthropic, and you’re also the co-founder of the Future of Life Institute, which last October released a statement on superintelligence, calling for a prohibition on superintelligence development until two conditions are met: number one, broad scientific consensus that it can be done safely and controllably, and second, strong public buy-in. Let’s just ask the hard question. What would an effective prohibition look like in practice? How could that work?
Thank you very much. So I think I’m kind of, like, a little bit different from the people on this panel, in that my main kind of threat vector, my main worries about the future, are less about how AI is being deployed and diffused and taken into practice. I’m way more worried about what is happening in the labs, in the top AI companies. I’m not sure what the future is going to look like, because they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence. And, like, I mean, we just saw yesterday the picture, the photo, where Narendra Modi, Dario Amodei, and Sam Altman refused to link hands.
I mean, this is, like, indicative. We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can’t do it alone. And I think there are, like, two reasons why it’s, like, an unfortunate situation. One is that the U.S. as a country is conflicted. They basically rely on AI for their economic and competitive power. So they are, like, very hesitant to kind of meddle with the now-cutthroat situation in AI companies, and the rest of the world really doesn’t understand how big a danger they are in now. So part of the reason why we did the superintelligence statement is to create awareness that there is increasing political demand to do something about this situation.
We now have more than 130,000 signatures, which is, like, many times more than our original six-month pause letter had in 2023. So yeah, if there was enough pressure, I think clearly, like, the rest of the world is still kind of more powerful than the kind of leading AI countries. There are more people, there’s more economic power, etc. So if there was, like, enough pressure, this could be solved. Like, the way I put it is that it’s super hard to do, like, a $10 billion project; it’s impossible to do it if it’s illegal. So having these trillions flow into AI actually makes it easier to govern, not harder.
So I’m tempted to follow up with a question about investors and their potential role in this. They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. So what would it take to bring investors meaningfully into the safety conversation?
So, yeah, I think the answer is kind of simple. I don’t think investors play much of a role anymore, because the leading AI companies now are kind of above the level where private investors can influence them. They will now IPO soon. And in an IPO market, there is, like, a level playing field, which means that if somebody’s not funding, somebody else will. So I don’t think investors… investors could have affected things, but, like, five, ten years ago.
Great. Okay, so since we’re running short on time, I’m going to ask one question, and ask you all to answer it, which is about the 12-month window. Very briefly, each of you. Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months, before frontier AI capabilities advance beyond our ability to evaluate and govern them. So what would you recommend be prioritized between now and, basically, the next year to two years, each of you, to enhance safety and security?
I think there are two, really. I think the AI safety research priorities need to be refreshed, because the field has moved so quickly. The Singapore consensus identified a set, but as soon as they are published, we recognize that they will be out of date. So we need to refresh it. That’s why we’re going to have the second edition, you know, worked on, hopefully in a few months. The second thing, I think, is that we can’t just keep thinking about frameworks, you know, and guidelines. At some point, we need to be able to introduce better testing tools. And until we are able to do so, the companies that are developing and deploying AI models also don’t have a very practical way of giving assurance.
So I’d like to see some further advancements in those two areas in the next 12 months.
I’ll be really quick. I know there’s always a temptation in these sorts of conversations: what is the one thing that can sort of fix it all? And the truth is, there’s not one thing. We’ve got to go as fast as we can to play catch-up to a degree, but we’ve also got to go as comprehensive and as deep as we can. There’s just no alternative. There’s catch-up to be played, we’ve got to put in a real effort, and it’s got to be right across the board. And I don’t think that you can just say there’s the one thing that will make us all safe and it’s going to be okay.
Minister Gobind?
I think, as I said earlier, we need to start thinking how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance. In this regard, we have to understand that things are going to move very quickly, and you’re going to see new technology develop very fast, which brings new risks as well. So, in that regard, you’ve got to build something that’s sustainable, and I think in order to do that, institutionalizing it should be a priority.
Everyone is really rushing for AI system development, AI solution development. That means AI safety measures are currently under-invested. So I really like to urge all of us to think about this: it is not free, you know. We need to spend some money to protect the system in advance, from scratch, when you design the system. So that means we should allocate some money to fully invest in the
Jaan Tallinn?
So, slow down. We really need to slow down. The companies are asking for it. And instrumental to that would be, basically, transparency: more people should know what the leaders of AI companies know, in order to basically understand how crucial the slowdown now is.
Okay, great. Well, I believe we have a little bit of a close coming, and thank you all so much. I wish we had had a day to talk about all of these issues. But thank you so much. Thank you very much.
Thank you very much, Eileen, and this fantastic panel, excellencies, colleagues, friends. What we’ve heard today confirms something important: the coordination gap in frontier AI safety is real, and it is urgent. And as we’ve discussed today, it is closable. And before I hand over the floor to Osama Manzar to close off with a few minutes of remarks and reflection, I’d like to invite you all to the next edition at the United Nations General Assembly in New York, where we hope to organize the fourth edition of AI Safety Connect, hopefully with many of the great policymakers and leaders we have heard from today, to carry forward that collective effort. Osama, the floor is yours.
Well, thank you very much. And we are one of those absentee co-organizers in this one, you know, being the local one. But apart from thanking each one of you who didn’t get up and, you know, go out of the room, and every one of you who gave all the safety remarks before usage of AI, on behalf of the 40 million people that we have reached out to in the last 23 years, and the billions of other people whom we are going to work for, I want to suggest that the entire safety aspect of AI should be framed more as: please save people from AI. Right? Because that’s the safety, like it’s a car on the road.
You know, we have to save people before you teach people how to think. So we also have to keep a very, very strong focus: how do we save human intelligence from artificial intelligence? And how do we build in the safety guards and all the ethics and all the, you know, policy playbooks? Thank you very much.
“Nicolas Miailhe warned that the race toward artificial intelligence has shifted from a theoretical pursuit to a massive, financially‑driven endeavour, with billions – possibly trillions – of dollars being poured into frontier AI research while safety measures lag behind.”
The knowledge base notes massive compute investment driven by the race to be first and highlights the scale of AI investment worldwide, confirming the claim of a large financial push and accompanying safety concerns [S72] and [S73].
“The shift toward massive AI investment is driven by competition to be first, though efficiency improvements may reduce compute requirements.”
Additional nuance is provided: while investment is huge, future efficiency gains could lessen the need for such large spending, adding detail to the claim about billions-trillions of dollars [S72].
“AI Safety Connect convenes semi‑annual global meetings at major AI summits and the UN General Assembly to accelerate safety discussions, build capacity and conduct closed‑door trust‑building exercises.”
The source states that the group meets at each AI summit and also convenes at the UN General Assembly, with a six-month cadence for global safety discussions [S22].
“Stuart Russell argued that the governance challenge of ensuring only safe systems are built requires coordinated international action because the harms cross borders.”
The knowledge base explicitly mentions that the governance challenge needs global coordination because harms cross borders [S86].
“The New Delhi week hosted Prime Minister Dick Schoof of the Netherlands for a special address on leadership in AI safety.”
The UN General Assembly agenda lists Prime Minister Dick Schoof of the Netherlands speaking, confirming the presence of a Dutch prime minister at the event [S79].
The panel shows strong convergence on several core themes: the need for coordinated global governance with multi‑stakeholder inclusion, transparency and incident‑reporting mechanisms, capacity‑building partnerships, periodic pauses or slowdowns in development, institutionalised governance structures, and an active role for middle‑power and regional actors. Additionally, there is agreement on updating research priorities and providing practical testing tools.
High consensus across most speakers, indicating a shared understanding that coordinated, inclusive, and transparent mechanisms—supported by capacity‑building and institutionalisation—are essential for safe AI development. This broad alignment suggests that concrete policy initiatives (e.g., incident‑reporting frameworks, middle‑power coalitions, and capacity‑building programs) have a strong foundation for international adoption.
The panel shows broad consensus on the need for stronger coordination, inclusive governance, and capacity building, but diverges on the primary levers: Jaan Tallinn pushes for a deliberate slowdown and greater lab transparency, whereas others (Mathias Cormann, Stuart Russell, Eileen Donahoe) prioritize building coordinated incident‑reporting systems, multi‑stakeholder trust processes, and middle‑power diplomatic engagement. A notable unexpected split concerns the role of investors, with Tallinn deeming them irrelevant and Donahoe urging their involvement.
Moderate to high. While participants share the overarching goal of safer AI, they differ sharply on strategic priorities (slowdown vs infrastructure) and on who should drive change (investors vs governments/middle powers). These divergences could impede the formulation of a unified policy agenda unless reconciled through compromise mechanisms.
The discussion was driven forward by a series of pivotal remarks that reframed AI safety from a purely technical problem to a geopolitical and institutional challenge. Stuart Russell’s emphasis on cross‑border harms set the stage for Eileen Donahoe’s middle‑power narrative, which was then fleshed out through concrete examples from the OECD, Singapore, Malaysia, and the World Bank. The most significant turning points were Mathias Cormann’s call for coordinated transparency and incident reporting, Josephine Teo’s analogy linking scientific evidence to policy, and Jann Tallinn’s provocative proposal of a prohibition on superintelligence development. These comments shifted the tone from descriptive to prescriptive, prompting participants to focus on enforcement mechanisms, short‑term actionable priorities, and the limits of investor influence. Collectively, the highlighted comments shaped a consensus that immediate, inclusive, and institutionally backed coordination—especially around incident reporting and testing tools—is essential within a narrow window before frontier AI capabilities outstrip governance capacities.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.