Policymaker’s Guide to International AI Safety Coordination

20 Feb 2026 17:00h - 18:00h

Policymaker’s Guide to International AI Safety Coordination

Session at a glanceSummary, keypoints, and speakers overview

Summary

Nicolas Miailhe opened by noting that the AI race has moved from theory to massive investment, with billions-trillions being poured into development while safety research lags behind [1-3]. He explained that AI Safety Connect was created to mobilise global-majority engagement, convene semi-annual summits at AI conferences and the UN, and run capacity-building and closed-door trust exercises [6-8][11-15].


Stuart Russell described the International Association for Safe and Ethical AI as a worldwide scientific society of thousands of members that aims to ensure AI systems operate safely and ethically, and he highlighted that achieving this requires both technical solutions and coordinated governance [33-38][40-44]. He stressed that AI-related harms cross borders, making global coordination essential, and pointed to India’s summit as an example of inclusive international dialogue [44-46].


Eileen Donahoe framed the panel by stating that rapid AI progress is outpacing minimal guardrails, creating a fragmented, non-binding governance landscape, and argued that middle-power and majority states can leverage pooled resources and normative influence to shape global AI safety [56-61][62-66]. She added that the panel would identify present coordination gaps and propose practical steps for policymakers in the coming months [66-68].


Mathias Cormann of the OECD identified inclusion of all stakeholders and evidence-based trust as key lessons, and warned that policy cycles are too slow for the pace of AI innovation, urging occasional pauses for testing and auditing [77-84][85-88]. He argued that the most critical frontier-AI safety infrastructure is coordinated transparency and incident reporting, citing the Hiroshima Code of Conduct and the emerging Global Partnership on AI incident-reporting framework as steps toward an international response centre [91-96].


Singapore’s Minister Josephine Teo noted that smaller states depend on foreign AI technologies, so translating scientific knowledge into effective policy requires rigorous testing, standards and international collaboration through bodies such as the OECD, AI Safety Connect and ICI [103-110][111-119][140-144]. Malaysia’s Gobind Singh Deo highlighted the ASEAN AI Safety Network and a forthcoming AI Governance Bill, emphasizing that without enforceable agencies, standards and regulations remain ineffective and must be institutionalised across the region [152-158][162-166][167-173].


World Bank Vice-President Sangbu Kim said the Bank can help Global South countries embed safety architecture from the design stage by partnering with advanced economies and firms to share red-team practices and build capacity [178-184][185-200]. Jann Tallinn warned that the most pressing risk is the unchecked race for superintelligence, calling for a slowdown supported by transparency and noting that private investors now have little influence over the leading AI firms [210-218][221-227][231-235].


Nicolas concluded that the coordination gap in frontier AI safety is real and urgent but can be closed, inviting participants to the next AI Safety Connect at the UN General Assembly to continue collective action [260-264].


Keypoints

Major discussion points


Rapid AI progress outpaces safety and policy, demanding urgent global coordination.


Nicolas opens by noting the “race towards artificial intelligence is no longer a theoretical pursuit” and that “safety is not keeping pace” [1-4]. Stuart Russell stresses that AI harms “cross borders” and require coordinated governance [44-46]. Eileen Donahoe describes a “fragmented…risk-management landscape” that fails to shape incentives [57-59]. Mathias Cormann adds that “AI is moving much faster than policy cycles” creating gaps [82-84].


Middle-power and global-majority states can lead AI governance through pooled resources, normative influence, and regional networks.


Donahoe argues that “middle powers…can shape the direction of global AI practices” and that their collective power will determine whether governance moves beyond rhetoric [62-66]. Cormann highlights the need for “inclusion…objective evidence” and notes the OECD’s success in building consensus among many countries [77-80]. Singapore’s Minister Teo stresses translating science into policy and the importance of interoperable standards, while Malaysia’s Minister Gobind points to the ASEAN AI Safety Network as a model for regional coordination [103-110][152-156].


Concrete infrastructure proposals: transparent incident reporting, an international incident-response centre, and open-source safety tools.


Cormann identifies “coordinated transparency and incident reporting” as the most critical frontier-AI safety infrastructure [91-92]. He describes the GPI AI Common Framework for Incident Reporting and the prospect of an international Incident Response Center [95-97]. He also mentions the OECD’s open-source safety-tool catalogue to make trustworthy AI easier to implement [98-99].


Building institutional capacity, standards, and enforcement mechanisms is essential.


Teo uses the aviation-safety analogy to illustrate the need for rigorous testing, standards, and long-term research before policies are set [110-119][132-138]. Gobind emphasizes that standards and regulations must be backed by agencies capable of enforcement, and that ASEAN needs sustained political will and technical resources [162-166][172-173].


Calls for a slowdown or even a provisional prohibition on super-intelligence development, and discussion of investors’ limited influence.


Cormann suggests occasional “pause, test, monitor, audit” to build public trust [84-86]. Jann Tallinn warns that the “cut-throat race” in labs is the biggest risk and cites the Future of Life Institute’s call for a prohibition until broad scientific consensus and public buy-in are achieved [207-214][226-227]. He later notes that investors now have little leverage over the leading AI firms [231-235].


Overall purpose / goal of the discussion


The panel was convened to diagnose the current “coordination gap” in frontier AI safety, highlight why middle-power and global-majority engagement is crucial, and outline concrete, near-term actions (incident-reporting frameworks, standards, institutional capacity, and possible slowdown measures) that policymakers can take within the next 12-24 months to make AI development safer and more trustworthy [57-66][91-99][240-250].


Overall tone and its evolution


The conversation begins with an urgent, almost alarmist tone about the speed of AI development and the lag in safety [1-4][57-59]. It quickly shifts to a collaborative, solution-focused tone as participants emphasize inclusive coordination, shared lessons, and concrete infrastructure [77-84][91-99]. Mid-discussion, the tone becomes more pragmatic, using analogies (aviation safety) and regional examples to stress the need for standards and enforcement [110-119][158-166]. Towards the end, a more cautionary and even admonitory tone emerges, calling for pauses, possible prohibitions, and highlighting the limited role of investors [84-86][207-214][256]. The closing remarks return to a hopeful yet urgent tone, reaffirming that the coordination gap is “real, urgent, and closable” [262-264].


Speakers

Speakers (from the provided list)


Gobind Singh Deo – Minister (Malaysia), leading Malaysia’s 2025 ASEAN chairmanship; involved in AI governance and ASEAN AI Safety Network. [S1]


Jann Tallinn – AI investor; founding engineer of Skype; co-founder of the Future of Life Institute. [S3]


Mathias Cormann – Secretary-General of the Organisation for Economic Co-operation and Development (OECD). [S5]


Sangbu Kim – Vice President for Digital and AI at the World Bank. [S6]


Stuart Russell – Professor of Computer Science, University of California, Berkeley; Director of the International Association for Safe and Ethical AI (ICI). [S8]


Nicolas Miailhe – Founder/CEO of AI Safety Connect; organizer of AI safety convenings and capacity-building initiatives.


Eileen Donahoe – Founder and Managing Partner of Sympathico Ventures; former U.S. Special Envoy for Digital Freedom and Ambassador to UNHCR. [S14]


Osama Manzar – Co-organizer (Digital Empowerment Foundation) for AI Safety Connect; involved in grassroots outreach. [S18]


Josephine Teo – Minister for Digital Development and Information, Government of Singapore. [S20]


Additional speakers (not in the provided list)


Cyrus – Host/moderator who introduced the session (mentioned in the opening remarks).


Dick Schuh – Prime Minister of the Netherlands (mentioned as a guest speaker delivering a special address).


Matthias Korman – (Same person as Mathias Cormann; already listed).


Stuart Russell – (already listed).


Other brief mentions: “Professor Stuart Russell” (already listed), “Osama Manzar” (already listed).


Full session reportComprehensive analysis and detailed insights

The session opened with Nicolas Miailhe warning that the “race towards artificial intelligence is no longer a theoretical pursuit” and that “billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence” while “safety is not keeping pace with it” [1-4]. He noted that AI Safety Connect was created to “help shape the frontier AI safety and secure agenda towards what I would frame as commonsensical AI risk management” and to “encourage global majority engagement into frontier AI safety” [6-8]. The event was co-hosted by the International Association for Safe and Ethical AI (ICI) and the Digital Empowerment Foundation, represented by Osama Manzar [11-15][12-13], and featured a special address by Prime Minister Dick Schuh of the Netherlands [9-10]. To achieve its aims, the organisation convenes semi-annual gatherings at major AI summits (Paris, India, upcoming Switzerland) and at the UN General Assembly, and also runs capacity-building and closed-door trust-building exercises [11-15].


Stuart Russell introduced the International Association for Safe and Ethical AI (ICI), describing it as “a global, democratic, scientific and professional society” with “several thousand members and approaching 200 affiliate organisations” [33-35]. He also joked that ICI is “the world’s worst acronym.” Russell framed AI safety as both a technical challenge (“how do we even build systems that have that property?”) and a governance challenge (“how do we ensure that those are the systems and only those systems get built?”) [40-42]. He stressed that harms such as psychological damage or loss of human control “cross borders” and therefore “global coordination is essential” [44-46].


Eileen Donahoe set the agenda by observing that the “race to AGI and superintelligence intensifies” while “the technology is advancing rapidly and being deployed with minimal guardrails” [56-57]. She argued that existing risk-management processes are “ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators” [58-60]. Donahoe highlighted the strategic potential of “middle-power and global-majority states” to “leverage pooled resources, market leverage, normative influence and regulatory innovation” to shape AI safety, asserting that “leading from the middle may turn out to be a more powerful approach than previously anticipated” [62-65]. The panel’s purpose, she said, was to “identify present-day coordination gaps in the global AI practice and the global market” and to propose “practical steps policymakers can take in the coming months” [66-68].


Mathias Cormann (OECD) reflected on lessons learned from building consensus. He stressed that “trust is built through inclusion and on the basis of objective evidence” and that bringing together governments, companies, civil society and technical experts is essential because each “has a different perspective and different imperatives” [77-80]. He warned that “AI is moving much faster than policy cycles have traditionally moved,” creating gaps between innovation and necessary oversight [82-84]. Cormann advocated occasional “pause, test, monitor, audit, share information” to build confidence that systems respect fundamental rights [85-86]. Regarding infrastructure, he identified “coordinated transparency and incident reporting” as the most critical piece, citing the Hiroshima Code of Conduct and the emerging Global Partnership on AI (GPI) Common Framework for Incident Reporting, which already has 25 organisations submitting detailed risk-management reports [91-96]. He suggested that this framework could evolve into an “international AI Incident Response Center” that shares alerts without penalising reporters [95-97]. Cormann also announced an OECD open-call for open-source safety and evaluation tools, to be catalogued on the OECD.ai platform, thereby making trustworthy AI “easier to implement in practice” [98-99].


Singapore’s Minister Josephine Teo explained that smaller states “cannot set the rules” because the AI technologies they rely on “do not originate from our shores” [104-107]. Nevertheless, she argued that policymakers must “translate what we know from science into policy” through rigorous testing, simulations and interoperable standards. Using an aviation-safety analogy, she described how determining safe runway separation for A380s required “invest[ing] in the research… in the tests… in the simulations” and warned that differing national standards would create operational difficulties [110-119][132-138]. Teo concluded that “international collaboration through bodies such as the OECD, AI Safety Connect and ICI” is required to develop standards that are both scientifically sound and globally interoperable [140-144].


Minister Gobind Singh Deo (Malaysia) highlighted the ASEAN AI Safety Network as a concrete regional mechanism and noted Malaysia’s “dual-track approach of building national capacity while leading regional coordination” [152-156]. He warned that standards, regulations and legislation are ineffective without an “agency that can enforce it,” otherwise they remain “strong on paper but … not … have that impact” [162-166]. Deo called for sustained political will, technical capacity and resources to operationalise the network, and argued that ASEAN must first strengthen domestic institutions before moving to a collective regional framework [167-173].


Sangbu Kim, Vice-President for Digital and AI at the World Bank, described how the Bank can help Global South countries embed safety “from the design stage” by “partnering with advanced economies… and very high-end examples” to share red-team practices and build capacity [178-184][185-200]. He noted the paradox that AI is “the sphere” capable of penetrating any shield, yet “we also can build strong protective systems by fully utilizing AI,” underscoring the need for close collaboration between developing and advanced economies to stay ahead of emerging threats [196-199][200].


Jann Tallinn, co-founder of the Future of Life Institute, warned that the “cut-throat race” in top AI labs poses the greatest danger and called for a “slowdown” until two conditions are met: a broad scientific consensus that superintelligence can be developed safely, and strong public buy-in [210-214]. Tallinn illustrated the competitive climate with a recent photo of Narendra Modi, Dario Amadei and Sam Altman standing apart without linking hands, and noted that Amadei and Demis Hassabis had called for a slowdown at Davos [215-218]. He argued that massive funding streams could be leveraged as a lever for safety if public pressure is sufficient, but observed that “investors don’t play much of a role anymore because the leading AI companies now are kind of above the level where private investors can influence them” as they head toward IPOs [221-227][232-235]. Tallinn reiterated the need to “slow down” and suggested that greater transparency about what AI leaders know would help create the political pressure required for a slowdown [256-257].


When asked to prioritise actions for the next 12-24 months, Minister Teo said the “AI safety research priorities need to be refreshed” because the field moves quickly, and that “we need to introduce better testing tools” to give developers practical assurance [240-249]. Cormann added that there is “no one thing that will make us all safe” and called for a “comprehensive” effort that catches up with innovation while deepening coordination [251-254]. Deo stressed the need to “institutionalise” AI-safety governance structures so they can keep pace with rapid technological change [253-255]. The panel collectively agreed that coordinated transparency, incident-reporting frameworks and the development of open-source safety tools are immediate priorities, while recognising that enforcement mechanisms and sustained institutional capacity remain open challenges [91-99][162-166][236-239].


Nicolas Miailhe closed by reaffirming that “the coordination gap frontier in AI safety is real, and it is urgent” yet “closable” [262-264]. He invited participants to the forthcoming UN General Assembly session in New York, where the fourth edition of AI Safety Connect will be hosted, hoping to continue the collective effort [265]. Osama Manzar concluded with a broader moral framing, urging that “the entire safety aspect of AI should be more from ‘please save people from AI’… we have to save human intelligence from artificial intelligence” and calling for strong safety guards and policy playbooks to be built into AI systems [266-276].


Overall, the discussion revealed strong consensus that AI risks are global and demand coordinated governance, inclusive evidence-based consensus-building, and robust capacity-building. Middle-power and regional actors were identified as pivotal levers for shaping standards, while concrete infrastructure proposals-transparent incident reporting, an international response centre, and open-source safety tool catalogues-were widely endorsed. Points of contention included the extent to which private investors can influence safety incentives, whether voluntary reporting or mandatory enforcement should dominate, and the preferred mechanism for slowing development (periodic pauses versus a provisional prohibition). These disagreements underscore the complexity of aligning diverse stakeholder interests into a coherent global AI-safety strategy.


Session transcriptComplete transcript of the session
Nicolas Miailhe

that the race towards artificial intelligence is no longer a theoretical pursuit. As billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence, the technology is now advancing rapidly. And safety is not keeping pace with it. There are wonderful opportunities on the other side of this quest. There are also big risks. And so that’s the purpose, that’s the reason AI Safety Connect was founded. AI Safety Connect is there to help shape the frontier AI safety and secure agenda towards what I would frame as commonsensical AI risk management. AI Safety Connect has been founded to encourage global majority engagement into frontier AI safety. And AI Safety Connect, has been connected to showcase Concrete’s governance coordination mechanisms, tools, and solutions.

So how we do this? We convene at each AI summit. So last year we started in Paris, this year in India, next year we’re going to be in Switzerland. But we also convene at the UN General Assembly, right? We need a faster tempo for these safety discussions, so every six months we have this global convening. We also do capacity building, and we also do trust building exercises at times behind closed doors. Well, this week in New Delhi has been an intense one, an impactful one. On Tuesday we had a full day of panels, conference, solution demonstrations, and closed -door workshop discussions on some specific nuts to crack to advance AI safety. We, for example, at the privilege of, hosting Prime Minister Dick Schuh from the Netherlands on stage to deliver a special address on the role of top leadership in advancing AI safety.

We also engage with industry, engage with academia. of India and abroad. So we’re an extremely busy week beside our main event. We had this closed -door discussion that I was mentioning yesterday and today, this closed -door scientific dialogues. We’re going to publish the results soon that brought together senior industry leaders to discuss shared responsibility for AI safety. Well, obviously, none of this would happen without partnership. And we want to thank our co -hosts, the International Association for Safe and Ethical AI and its director, Professor Stuart Russell, to whom I will hand over the floor in a few minutes, and the Digital Empowerment Foundation who is anchoring us at the grassroots here with Osama Manzar,. We’ll close the session later on.

And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moderate that panel and we’re thankful for that. The Future of Life Institute, Ima and Yann, who’s been supporting this effort, and the Mindero Foundation, whose team is here as well with team. And it’s great to have your support and we are thankful for that. So today we’re about to hear from His Excellency Matthias Korman who’s the Secretary General of the OECD We’re going to hear from Her Excellency Minister Josephine Theo who’s the Minister for Digital Development and Information at the Government of Singapore. Thank you for your continuous support, really appreciate that Same for Jann Tallinn who’s the AI investor but also a founding engineer at Skype and the co -founder of the Future of Life Institute And last but not least, we also have Minister Teo who’s going to be with us from Malaysia Minister for Digital Development and Information Thank you Minister as well as Vice President Kim for Digital and AI at the World Bank So an extremely important conversation to have And before we welcome you to the stage I would like to hand over the floor to Professor Stuart Russell to say a few words and to speak about also what’s happening next week in Paris Thank you so much.

Stuart Russell

Thank you very much, Cyrus and Nico. So as Nico mentioned, the International Association for Safe and Ethical AI, or ICI, the world’s worst acronym, is a global, democratic, scientific and professional society. We have several thousand members and approaching 200 affiliate organizations. Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity. And as Nico mentioned, our second annual conference will take place in Paris starting on Tuesday. It’s still, I think, possible to register, but we’re already up over 1 ,300 people coming. It’s at UNESCO headquarters in Paris. Thank you. So achieving this mission of ensuring… that AI systems operate safely and ethically is partly a technical challenge. How do we even build systems that have that property?

But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this panel is mainly about this second challenge. And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders. And we must coordinate to make sure that they don’t happen or they don’t originate anywhere. And it’s, I think, fitting that we are having this summit here in India, which has really, among other things, championed the idea that everyone on Earth should have a say. And so with that, I will hand over to Eileen. Thank you very much.

Nicolas Miailhe

Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the former U .S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UNHCR. Eileen? Welcome the speaker on the floor. Please, Your Excellency, Mr. Mattias Korman, Mr. Gobind Singh Deo, Mr. Josephine Teo, and Mr. Jann Tallinn, as well as Mr. Sangbu Kim, join us on stage.

Eileen Donahoe

Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get right to our speakers. So we’re here to share. Views on the opportunity for policymakers to impact international AI governance. As the race towards AGI and superintelligence intensifies, AI safety advocates face a compounding challenge. The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill -adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators. The result is an unharmonized governance landscape that fails to shape the behavioral incentives. Of those building and funding frontier AI. Economies, governments, and societies do not respond well to such mixed signals.

While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper international diplomacy on the most… extreme risks. At this juncture, middle powers and global majority states can’t be seen as peripheral actors in this landscape. Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safeties. Leading from the middle may turn out to be a more powerful approach than previously anticipated. Whether or not that collective power is exercised now will determine whether international AI governance moves from the rhetorical level to the real -world impact on safety. This panel will aim to identify present -day coordination gaps in the global AI practice and the global market.

We will also look at the role of global AI in international AI safety and highlight practical steps policymakers can take in the coming months to close them. So to our panel, I’ll start with Secretary General Corman. The OECD has done remarkable work over the past decade, developing consensus on the OECD principles, providing a definition of AI systems that has resonated internationally, and playing an international role in operationalizing the Hiroshima International Code of Conduct. Along with those foundations, we now have the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. With these principles, definitions, and frameworks in mind, two -part question for you. First, what are the key lessons learned from the process of building consensus and then implementing these frameworks?

And then second, looking ahead, what’s the most critical? What’s the most critical piece of coordinated frontier AI safety infrastructure we should be building now? Some have called for an international incident response center, and we’re all curious whether you think that should be a priority and achievable. Just some small, easy questions.

Mathias Cormann

In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is built through inclusion and on the basis of objective evidence. And, you know, I think what we’ve learned over the last few years is that bringing together all the relevant actors, governments, companies, civil society, technical experts, is what we need to do. I mean, each has a different perspective and different imperatives. I mean, markets reward the private sector for speed, scale, and innovation. While governments must manage risk and protect the public interest without stifling progress. But a challenge, and it’s been mentioned in some of the opening remarks, a challenge for policymakers in this context is that AI is moving much faster than policy cycles have traditionally moved, which easily then creates gaps between innovation and progress and opportunity, but necessary oversight, mitigation and management of risk.

But all sides in this conversation do share an essential common interest, and that is to ensure that the systems that are developing are trustworthy, because without public trust in the end, even the most powerful AI tools will struggle to gain broad adoption. So that means that occasionally, and it’s not always popular with everyone, but occasionally we should slow down. Occasionally we should actually pause. Pause, test, monitor, audit, share information, and take the time and invest in building confidence that these systems can work as intended and respect fundamental rights. So that’s sort of, I guess, the first point. another critical lesson involves international consistency and this is part of the reason why these sorts of summits are so important is to really facilitate these conversations among countries and among different jurisdictions because national priorities can vary quite widely and there’s of course fragmentation and compliance cost related risks and at the OECD really what we’ve been doing for six decades now across different policy areas is to try and reduce fragmentation and by achieving alignment around key principles, building shared evidence and facilitating the necessary conversations to develop a more coherent better coordinated approach moving forward and on AI I mean we’ve developed the OECD principles which were first adopted in 2019 and which are now adhered to by 50 countries around the world and that was really the first globally recognized baseline for trustworthy AI The OECD’s lifecycle definition of an I .I.

system has since shaped policy frameworks from the EU I .I. Act to U .S. executive orders. And we’ve had just earlier the meeting of the Global Partnership on I .I. co -chaired by Korea and Singapore. We’ve got the OECD I .I. Policy Observatory, which is sort of essentially the broad gamut of all of the different policy approaches around the world to provide countries and industries with data and evidence on what’s being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it, but really looking at the facts. Now, looking ahead, and you sort of ask a question here about what to do about the risk. I mean, the most critical piece of frontier I .I.

safety infrastructure is coordinated. transparency and incident reporting. I mean, the Hiroshima I .I. Process Code of Conduct and its reporting framework launched at the I .I.’s Action Summit in Paris last year. You know, that’s a promising step, and we’ve got to continue to develop that. Since their publication, 25 organizations across nine countries have already submitted detailed reports on how they manage I .I. risks, offering for the first time a comparable view of developer practices across jurisdictions. The next stage is to strengthen information sharing on I .I. failures and near misses. The GPI I .I. Common Framework for Incident Reporting aims to help us collectively learn from mistakes before they scale globally, and over time, this could evolve into an international I .I.

Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good flight. Finally, we do need to scale access to practical safety tools. With global partners, the OECD recently launched an open call for open source safety and evaluation tools hosted in the OECD .ai catalog of tools and metrics to make a trustworthy AI easier to implement in practice. I mean, these are some initiatives to form the foundation of a more transparent, data -driven, and interoperable AI governance ecosystem, and

Eileen Donahoe

Excellent. Minister Teo, a number of questions for you, but let me start with the fact that Singapore occupies a very distinctive position in the global geostrategic landscape as a pro -innovation, advanced knowledge economy, with deep commercial and diplomatic ties to both the U .S. and China. Thank you. As the race to AGI intensifies and bilateral tensions mount, is there a role for Singapore and other middle powers to play in bridging the coordination gap to keep scientific and safety channels open? And also, what’s the most important step middle powers can take in the next 12 months to help establish a shared minimum understanding of frontier safety?

Josephine Teo

Well, thank you very much for that question. I think there is no running away from the fact that for smaller states, and that includes Singapore, the technology that our companies, our citizens are going to rely on do not originate from our shores. So they don’t necessarily come within our jurisdictions. We don’t always get to set the rules. Having said that, I do believe that we’re not without. Thank you. agency. It doesn’t mean that we take a step back and just let things happen to us. There are still things that we can do. One of the most important things I think as policymakers is for us to think about what it takes to translate what we know from science into policy.

And I wanted to just say why this is so important. In our case, as policymakers, the key questions will always be, are the policies that we make effective? And also, policies always come with trade -offs. With the question of effectiveness, there is always a need to understand what actually works, as opposed to what looks good on paper. With the question of trade -offs, it’s about understanding what we lose as a result of whatever safety aspects it is that we choose to put in place. And whether we can minimize them, can we mitigate them? Now, in areas where safety is the objective, we can’t just go with gut. We can’t just go with speculation. You take, for example, in my previous life, I was working on promoting Singapore’s Air Hub.

And we had to deal with a question of aviation safety. We were expanding our airport. It was going to carry many more passengers in and out of the country. But we are limited by number of runways. And in landscape Singapore, you can’t just click your finger and say, let’s build a new one. It’s a long runway. It’s very expensive anyway. Then there is the question of what do you do when you have these jumbo jets like A380s? Because each time an A380 hits the runway. It creates so much of a blast that you really need to create more distance between the A380 taking off and the next aircraft that is scheduled to take off.

Now, this is not a question that the transport minister can just decide on a whim. The air traffic management has to decide on its policy of how much distance is considered safe between landings or rather between takeoffs. And to answer this question, you really need to invest in the research. You need to invest in understanding the tests. So the science is one part of it. But between science to policy, you are actually going to need a lot of time. You are going to need a lot of tests. You are going to need a lot of simulations. you need to understand whether the distances that you decide are safe works well in a thunderstorm, a tropical thunderstorm.

Does it work just as well in a snowstorm? Well, we don’t have snow in Singapore. But you think about the airline that operates this. If each country that they fly into has a different safety distance, that creates some difficulty. So we therefore think that not only is there a need to invest in understanding the science, not only is there a need in understanding what testing looks like, what good testing looks like, there is also a need for us to think about what standards that will eventually be interoperable, what do they look like, which is why we think that international efforts, the collaboration that… that is being carried forward by the OECD through the Global Partnership on AI, the AI Safety Connect effort, and also ICI.

Where is Stuart now? Those kinds of efforts, you can’t do away without. At the outset, there is likely to be a bit of a fragmentation. And the trade -off with not having these conversations is that we are not even going to make advances in AI safety. And I don’t think that that’s a very good place for us to be in. It doesn’t give us the assurance that we can deliver to our citizens. And it does not create a foundation of trust that will eventually help us to push ahead with the use of this technology on a wider scale. So that’s how we are thinking about it, Aileen. Thank you.

Eileen Donahoe

So let me turn to Minister Gobind from Malaysia. and note that under your leadership and Malaysia’s 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN’s agenda by establishing the ASEAN AI Safety Network. Malaysia is now finalizing its own AI National Action Plan, and Malaysia’s AI Governance Bill is expected in Parliament in 2026. So this dual -track approach of building national capacity while leading regional coordination represents a model of middle power agency that other countries are watching closely. So what lessons do you think other middle powers can draw from Malaysia’s experience? And on the ASEAN AI Safety Network, we have to note that operationalize and it will require sustained political will. technical capacity and resources.

So what concrete steps must ASEAN take in the next 12 to 18 months to ensure that this isn’t just aspirational?

Gobind Singh Deo

Online fraud, for example, scams, you have deepfakes today, you have huge concerns about certain vulnerable groups that are going to be impacted, children, older folk and so on and so forth. So this is something that stretches across the region. How do we deal with it in a coordinated way and ensure that the conversation doesn’t just stop with the government of the day, but it’s a conversation that expands over a period of time with clear policies that we can actually execute. The second layer that I think we need to think about is in the event there’s a need for execution. When we speak about risks in AI and we speak about how we’re going to govern these risks, we often talk about standards.

We often talk about regulation. We even speak about legislation at times for areas that pose higher risks. But ultimately, it really comes back down to you making sure you have an agency that can enforce it, because you can have the best standards. regulations and legislation but if there is no institution that’s really able to implement those standards to ensure that they are properly implemented and also to ensure that rules for failure to implement are enforced then those standards regulation and policies are really going to be just strong on paper but they’re not going to really have that impact that you need. So again, how do you build this mechanism across ASEAN where every country strengthens themselves domestically first and then moves across to the ASEAN member states and hopes to learn from their experiences so that we can together move ahead in this new world of AI and I think the threats that we anticipate in future.

Now the third part which is really important is also ensuring that whilst this goes on, you create those policies, you have institutions that enforce and the discussions persist at an ASEAN level. I think what is important is also to have that expertise looking at what comes next. We must make sure that our countries are prepared for the risks that are to come with the next generation technology. This is important because you don’t want a situation where new technology is adopted and there are risks that come with this new technology, you’re not prepared. I think that’s something we want to avoid and that’s the reason why I come back to where I started off. We really need to look at building institutions that have the expertise and of course are able to sustain as we go along and to build and deliver something that’s impactful.

Sorry, but that’s in short what we’re doing in Malaysia today.

Eileen Donahoe

Excellent. Thank you so much. Okay. Let me turn to Vice President Kim and talk about the World Bank, which has been at the forefront of digital public infrastructure, helping countries leapfrog legacy systems. We note that frontier AI systems, though, are arriving in the global south under very different conditions from previous waves of technology and governments are under pressure to deploy AI systems quickly. often using models that haven’t been adequately tested, let alone certified for their context, languages, or risk tolerances. So how can the World Bank help Global South countries move from being passive recipients of frontier AI to active shapers of safety and reliability requirements before the systems are deployed at scale?

Sangbu Kim

Thank you. In one word, definitely we need to make our clients well prepared from the scratch. When they design the AI systems, definitely they need to design the safety architecture within the system. That’s very, in general, that’s very correct. But real challenge is that… nobody can really expect a new type of new threat especially our some countries in a low capacity it is really hard to figure out what that will be so that’s the in order to tackle that type of irony and dilemma we need to very closely working with very developed economies company and government and very high end examples so that we can really well connect those good examples to the developing world so one partnership is one of the good examples we are helping our country for example some big tech company who is running some red teams so that you they are trying very hard to attack their system in advance by fully utilizing AI.

So through that type of practice and experiment, they can learn how to prevent the AI attack in the future, which is pretty much possible. So in this way, it is inevitable for our developing countries to keep track on the new trend and new innovation, even in this safety protection area. It is the only way. So I have to admit that this constraint. But think about this. Some anecdotal story in East Asia, in China and in Korea, there’s two models. Merchant who is selling two products. Number one is. sphere. And then they keep saying that this sphere is so strong so it can get through any kind of shield. So this is one vendor. The other vendor is selling shield.

And then they are saying that this shield is one of the most safe and strong shields. No sphere can get through this shield. This is exactly an ironical situation. If you think about AI, AI attack is the sphere. AI is so strong and smart and really capable so it can get through and hack any system with high -end intelligence and knowledge. But good news is that on the other hand, we also can build strong protective systems. by fully utilizing AI. So this is one good news, but the constraint is that we do not clearly know how AI can really evolve to fully protect those big attacks in the future. So in order to solve this type of ironical situation from the developing world point of view and from the World Bank point of view, this is the only way to very closely work and collaborate and learn from the advanced technology and advanced company and advanced country.

Eileen Donahoe

Thank you so much. Last but not least, Mr. Jan Tallinn, you occupy a very rare position in this landscape as a founding engineer of Skype, an early investor in DeepMind and Anthropic, and you’re also the co -founder of the Future of Life, which last October released a statement on superintelligence. calling for prohibition on superintelligent development until two conditions are met. Number one, broad scientific consensus that it can be done safely and controllably, and second, strong public buy -in. Let’s just ask the hard question. What would an effective prohibition look like in practice? How could that work?

Jann Tallinn

Thank you very much. So I think I’m kind of like a little bit different from the people on this panel. And that too, I guess. That I’m kind of, my main kind of threat vector about, my main worries about future are less about like how AI is being deployed and diffused and taken into practice. And I’m way more worried about what is happening in the labs, in the top AI companies. I’m not sure what the future is going to look like. because they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence. And, like, I mean, we just saw yesterday the picture where, with a photo of it, Narendra Modi, Dario Amadei, and Sam Altman refused to link hands.

I mean, this is, like, indicative. We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can’t do it alone. And I think there are, like, two reasons why it’s, like, an unfortunate situation. One is that the U .S. as a country is conflicted. They basically rely on AI for their economic and competitive power. So they are, like, very hesitant to, kind of, meddle with now. cutthroat situation in AI companies and the rest of the world really doesn’t understand how big danger they are now. So it’s part of the reason why we did the superintelligence statement is to create awareness that there is increasing political demand to do something about this situation.

We now have more than 130 ,000 signatures which is like many times more than we had done our original six months post letter had in 2023. So yeah, that’s like if there was enough pressure, I think clearly like the rest of the world is still kind of more powerful than the kind of leading AI countries. There are more people, there’s more economic power, etc. So if there was like enough pressure this could be solved. Like the way I put it is that it’s super hard to do like a $10 billion project. it’s impossible to do it if it’s illegal. So having these trillions flow into AI actually makes it easier to govern than harder.

Eileen Donahoe

So I’m tempted to follow up with a question about investors and their potential role in this. They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. So what would it take to bring investors meaningfully into the safety conversation?

Jann Tallinn

So, yeah, I think the answer is kind of simple. I don’t think investors play much of a role anymore because the leading AI companies now are kind of above the level where private investors can influence them. They will now IPO soon. And if you are like an IPO market, there is… like, like, so level playing field, which means that like, if somebody’s not funding, somebody else will. So I don’t think investors, investors could have affected things, but like, five, 10 years ago.

Eileen Donahoe

Great. Okay, so since we’re running short on time, I’m going to ask one question, and ask you all to answer it, which is about the 12 month window. Oh, the very shortly, each shortly. Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months before frontier AI capabilities advance beyond our ability to evaluate and govern them. So what would you recommend is prioritized between now and we’re basically in the next year to two years, each of you to enhance safety? and security?

Josephine Teo

I think there are two, really. I think the AI safety research priorities need to be refreshed because the field has moved so quickly. The Singapore consensus identified a set, but as soon as they are published, we recognize that they will be out of date. So we need to refresh it. That’s why we’re going to have the second edition, you know, worked on. Hopefully in a few months. The second thing I think is that we can’t just keep thinking about frameworks, you know, and guidelines. At some point, we need to be able to introduce better testing tools. And until we are able to do so, the companies that are developing and deploying AI models, they also don’t have a very practical way of giving assurance.

So I’d like to see in the next 12 months some further advancements. In those two areas.

Mathias Cormann

I’ll be really quick I know there’s always a temptation in these sorts of conversations, what is the one thing that can sort of fix it all and the truth is there’s not one thing we’ve got to go as fast as we can to play catch up to a degree but we’ve also got to go as comprehensive and as deep as we can there’s just no alternative, there’s catch up to be played, we’ve got to put a real effort and it’s got to be right across the board and I don’t think that you can just say there’s the one thing that will make us all safe and it’s going to be okay.

Eileen Donahoe

Minister Gobind?

Gobind Singh Deo

I think as I said earlier, we need to start thinking how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance in this regard, we have to understand that things are going to move very quickly and you’re going to see new technology develop very fast which brings new risks as well, so in that regard, you’ve got to build something that’s sustainable and I think in order to do that, institutionalizing it should be a priority.

Sangbu Kim

everyone is really rushing for ai system development ai solution development that means ai is currently ai safety measures currently under invested so i really like to urge all of us to think about this is not free you know things we need to spend some money to protect the system in advance from the scratch when you design the system so that means we should allocate some money to fully invest in in the

Eileen Donahoe

Jann Tallinn?

Jann Tallinn

so slow down we really need to slow down that the companies are asking for it and if we like instrumental to that would be basically transparency like more people should know what the leaders of ai companies know in order to basically understand how crucial the slowdown now is

Eileen Donahoe

okay great well i believe we have a little bit of a close coming and thank you all so much i wish we had had a day to talk about all of these issues. But thank you so much. Thank you very much.

Nicolas Miailhe

Thank you very much, Eileen, and this fantastic panel, excellencies, colleagues, friends. What we’ve heard today confirms something important. The coordination gap frontier in AI safety is real, and it is urgent. And as we’ve discussed today, it is closable. And before I hand over the floor to Osama Manzara to close off for a few minutes of remarks and reflection, I’d like to invite you all to the United Nations General Assembly next edition in New York, where we hope to organize the fourth edition of AI Safety Connect, and hopefully with many of the great policymakers and leaders we have heard from today, to carry forward that collective effort. Osama, the floor is yours.

Osama Manzar

Well, thank you very much. And we are one of those absentee co -organizer in this one. So, you know, because being a local, but I just want to I mean, apart from thanking each one of you who didn’t get up and, you know, go out of the room. And every one of you who gave all the safety remarks before usage of AI on behalf of 40 million people that we have reached out in the last 23 years. And billions of the other people whom we are going to work for. I want to suggest that the entire safety aspect of AI should be more from please save people from AI. Right. Because that’s the safety like it’s a car on the road.

You know, we have to save people before you teach people how to think. So we also have to keep a very, very strong thing. How do we save human intelligence from artificial intelligence? And how do we inbuilt in the safety guards and all the ethics and all the all the, you know, policy playbooks? Thank you very much. Thank you. Bye. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (19)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmedhigh

“The race towards artificial intelligence is no longer a theoretical pursuit; billions and maybe trillions of dollars are being deployed to push the AI frontier, and safety is not keeping pace with it.”

The knowledge base states that the race toward AI is no longer theoretical, that billions-trillions of dollars are being invested, and that safety is lagging behind the rapid technological advance [S1].

Confirmedhigh

“The coordination gap frontier in AI safety is real, urgent, and can be closed.”

A stakeholder’s opening remarks explicitly note that the coordination gap in AI safety is real and urgent, echoing the panel’s assessment [S11].

Additional Contextmedium

“Artificial intelligence is advancing at a rapid pace.”

An open-forum primer describes AI as advancing rapidly, providing broader context for the claim about fast technological progress [S108].

Additional Contextlow

“Technological development in AI is not without risk.”

Discussion notes highlight that AI development carries risk, adding nuance to the safety concerns raised in the report [S96].

External Sources (109)
S1
Policymaker’s Guide to International AI Safety Coordination — -Gobind Singh Deo- Minister from Malaysia (leading Malaysia’s 2025 ASEAN chairmanship)
S2
Malaysia: Fake News Act — The newMalaysian Minister of Communications and Multimedia, Gobind Singh Deo, said on 21 May that the Fake News Act in M…
S3
S4
TALLINN MANUAL 1.0 INT — 2 Af fi liations during participation in the project. 978-1-107-17722-2 – Tallinn Manual 2.0 on the International Law A…
S5
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — -Mathias Cormann- Secretary General, OECD (Organisation for Economic Co-operation and Development) -Moderator- Role: Ev…
S7
S8
Driving U.S. Innovation in Artificial Intelligence — 13. Stuart Appelbaum – President, Retail Wholesale and Department Store Union 14. Stuart Ingis – Chairman, Venable 15. …
S9
S10
Acknowledgements — In addition to coordinating simultaneous attacks on a single target, such UAVs could disperse to find and attack a la…
S11
https://dig.watch/event/india-ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moder…
S12
Policymaker’s Guide to International AI Safety Coordination — – Nicolas Miailhe- Eileen Donahoe- Jann Tallinn- Josephine Teo – Nicolas Miailhe- Mathias Cormann- Stuart Russell- Jose…
S13
IGF 2023 Global Youth Summit — Nicolas Fiumarelli:Thank you, Lily. My name is Nicolas Fiumarelli. Hello everyone. Today I am here in place of Umut, who…
S14
Policymaker’s Guide to International AI Safety Coordination — And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moder…
S15
https://dig.watch/event/india-ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moder…
S16
The Declaration for the Future of the Internet: Principles to Action — A key figure tackling this connectivity challenge is Zeyna Bouharb, serving as head of international cooperation at Oger…
S17
Hack the Digital Divides | IGF 2023 Day 0 Event #19 — Moderator – Peter A. Bruck:Can I ask the technical support to see if we can put the slides in? Is that good? Hello, good…
S18
S19
WS #211 Disability & Data Protection for Digital Inclusion — Osama Manzar emphasizes focusing on abilities and involving persons with disabilities in service provision, while Maitre…
S20
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Josephine Teo- Role/title not specified (represents Singapore)
S22
S23
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment addresses the fundamental challenge that cybersecurity threats are global while responses are often nationa…
S24
Towards a Safer South Launching the Global South AI Safety Research Network — Dr. Balaraman Ravindran from IIT Madras raised important questions about coordination, noting that multiple AI safety ne…
S25
State of play of major global AI Governance processes — Its flexibility and adaptability are praised for bridging institutional, cultural, and regional practices. A cooperative…
S26
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S27
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022) — Arab Association of Cybersecurity: Honorable Chair, distinguished delegates, esteemed colleagues and stakeholders, it’s …
S28
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Capacity Building Initiatives Capacity building and support mechanisms are crucial for meaningful stakeholder engagemen…
S29
Closing plenary: multistakeholderism for the governance of the digital world — Min Jiang:Developing such working methods should strive to avoid conflicts with or duplication of existing processes or …
S30
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — The level of disagreement among the speakers was minimal. This high level of agreement implies a strong consensus on the…
S31
Multistakeholder Model – Driver for Global Services and SDGs | IGF 2023 Open Forum #89 — At the heart of ICANN’s work lies the multi-stakeholder model, which shapes policies and manages unique identifiers. Thi…
S32
Building a Global Partnership for Responsible Cyber Behavior | IGF 2023 Launch / Award Event #69 — Eugene EG Tan:the misuse of those kinds of technologies? Thank you. It’s a great question, and there’s probably a very l…
S33
Advancing Scientific AI with Safety Ethics and Responsibility — Artificial intelligence | Building confidence and security in the use of ICTs | Monitoring and measurement Open source …
S34
Safe and Responsible AI at Scale Practical Pathways — “Deep work on working on fragmented data silos.”[5]. “It can be bridged but we have to think about how to make data inte…
S35
AI Meets Cybersecurity Trust Governance & Global Security — I mean, one of the most sacred things for us right now is to maintain public trust in our institutions. It’s a little ch…
S36
Building Trust through Transparency — Additionally, the speakers mention that in case of fraud or data leakage on the merchant’s end, the liability also falls…
S37
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Eileen Donahoe echoed this sentiment, advocating for universal safeguards to protect human rights in DPGs and DPI. This …
S38
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — Both speakers recognize that current governance processes are fragmented and overly complex, requiring better coordinati…
S39
Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All — Eileen Donahoe:It’s difficult. So many good questions and so many layers to them. I will start with the two points by ac…
S40
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S41
Hard power of AI — The analysis comprises multiple arguments related to technology, politics, and AI. One argument suggests that the rapid …
S42
Ethics and AI | Part 1 — Once brought to commercial existence, digital technologies raise multiple safety and security issues, which could have b…
S43
What is it about AI that we need to regulate? — For indigenous communities, the challenge is even more acute. InOpen Forum #73 Indigenous Peoples Languages in a Digital…
S44
Why science metters in global AI governance — I should just add that on this score, it will be much better if we can cooperate internationally to develop sound approa…
S45
Smart Regulation Rightsizing Governance for the AI Revolution — This comment is deeply insightful because it cuts through the optimistic summit rhetoric to present a stark geopolitical…
S46
WS #103 Aligning strategies, protecting critical infrastructure — Several international initiatives and tools were mentioned:
S47
Roundtable — A focus on infrastructure that has an immediate impact on human life, such as transportation, power supply, healthcare, …
S48
Opening of the session — Capacity building is essential for political and institutional resource development.
S49
Building Capacity in Cyber Security — 3. Strengthening institutional capabilities: Building capacity in cybersecurity involves equipping institutions such as …
S50
WSIS Action Line C7: E-Agriculture — Development | Capacity development | Legal and regulatory Since IFAD works through public sector investments to governm…
S51
Indias AI Leap Policy to Practice with AIP2 — “they are deliberately delayed because there are some private sector actors that don’t want these standards to be there …
S52
AI leaders call for a global pause in superintelligence development — More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio,have signeda joint…
S53
Artificial Intelligence & Emerging Tech — Certain principles, like “human in the loop,” can have different interpretations at different stages of AI deployment. A…
S54
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S55
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-s…
S56
GOVERNING AI FOR HUMANITY — – 120 Supported by the proposed AI office, the standards exchange would also benefit from strong ties to the internation…
S57
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Matilda Road:Mathilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here and…
S58
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And so when you think about the kind of infrastructural needs, it’s so it creates barriers for a lot of countries in the…
S59
Press Conference: Closing the AI Access Gap — Countries need robust data strategies that include sharing frameworks and data protection measures. These strategies are…
S60
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S61
What Proliferation of Artificial Intelligence Means for Information Integrity? — Specifically mentioned ‘transparency for frontier models’, ‘trust and safety, an investment in trust and safety, especia…
S62
Policymaker’s Guide to International AI Safety Coordination — But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this …
S63
Towards a Safer South Launching the Global South AI Safety Research Network — Dr. Balaraman Ravindran from IIT Madras raised important questions about coordination, noting that multiple AI safety ne…
S64
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment addresses the fundamental challenge that cybersecurity threats are global while responses are often nationa…
S65
Advancing Scientific AI with Safety Ethics and Responsibility — High level of consensus with significant implications for AI governance policy. The agreement across speakers from diffe…
S66
The mismatch between public fear of AI and its measured impact — Inmedicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S67
Day 0 Event #255 Update Required Fixing Tech Sectors Role in Conflict — Companies unwilling to engage beyond policy references; governments taking less responsibility leaving burden on investo…
S68
About the Commission — However, consistency and predictability in each and every aspect of the environment – be they political, economic, finan…
S69
Blended Finance’s Broken Promise and How to Fix It / Davos 2025 — Leila Fourie points out that the perception of risk in emerging markets is a significant barrier to investment. This per…
S70
PrefACe — The National Broadband Plan recognizes that making the right policy choices at home that result in domestic market succe…
S71
India unveils AI incident reporting guidelines for critical infrastructure — India isdevelopingAI incident reporting guidelines for companies, developers, and public institutions to report AI-relat…
S72
OPENING SESSION | IGF 2023 — Ulrik Vestergaard Knudsen:Thank you very much. It seems I have the opposite challenge compared to the previous speaker, …
S73
AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409 — In conclusion, the discussions surrounding AI and emerging technologies in warfare highlight the potential benefits and …
S74
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Yoshua Bengio advocates for substantial investment in AI safety research alongside the development of AI capabilities. H…
S75
AI and international peace and security: Key issues and relevance for Geneva — Regional Cooperation Mechanisms: Building regional cooperation mechanisms can significantly enhance the governance of AI…
S76
Laying the foundations for AI governance — This discussion revealed both the substantial challenges in translating AI governance principles into practice and the s…
S77
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S78
Ethics and AI | Part 1 — Once brought to commercial existence, digital technologies raise multiple safety and security issues, which could have b…
S79
WS #64 Designing Digital Future for Cyber Peace & Global Prosperity — Rapid pace of technological change outpacing policy frameworks
S80
Hard power of AI — The analysis comprises multiple arguments related to technology, politics, and AI. One argument suggests that the rapid …
S81
Global AI Governance: Reimagining IGF’s Role & Impact — Ivana Bartoletti: Thank you very much and so sorry for not being able to be physically with you. So I think I wanted to …
S82
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S83
Asia’s middle powers could shape AI governance framework — The European Union, China, and the United States may set benchmarks for AI governance. Still, Asia’s middle powers have …
S84
https://dig.watch/event/india-ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper inter…
S85
Closure of the session — Intersessional technical meetings and working groups should focus on critical infrastructure, incident response, and int…
S86
Future of International Cyber Diplomacy: Comprehensive Discussion Report — Practical tools for incident response and cooperation still need development
S87
Opening of the session — Chair: Thank you very much, Ms. Nakamitsu, for your very detailed and comprehensive overview of the work that we have…
S88
Opening of the session — Capacity building is essential for political and institutional resource development.
S89
HIGH LEVEL LEADERS SESSION I — Institutions should have the capacity for enforcement to ensure adherence to any rules that are set in place
S90
Media Hub — Need law enforcement, judiciary, court system, judges to understand cyber space and offenses, lawyers to be trained, pol…
S91
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — International Development Law Organization: Mr. President, Excellencies, it is a pleasure to participate in the summit…
S92
How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96 — Institutional capacity building is vital for civil societies. By strengthening their institutional structures, civil soc…
S93
AI leaders call for a global pause in superintelligence development — More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio,have signeda joint…
S94
Indias AI Leap Policy to Practice with AIP2 — He points out that some private‑sector actors deliberately slow standards development, and calls for mechanisms that imp…
S95
DeepSeek AI shake-up affects Bitcoin and tech stocks — Bitcoin experienced a 6% drop on 27 January, as stock markets reacted to the debut of China’s open-source AI model, Deep…
S96
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S97
9821st meeting — Mr. President, as the Secretary General has noted, artificial intelligence represents both the greatest opportunity, and…
S98
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S99
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Audience:Thank you. Thank you so much. I represent you from Chinese mission. We appreciate Her Excellency, Ambassador Es…
S100
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Ahmad Bhinder: Hello. Good afternoon, everybody. I see a lot of faces from all around the world, and it is really, re…
S101
AI in practice across the UN system: UN 2.0 AI Expo — TheUN 2.0 Data & Digital Community AI Expoexamined how AI is currently embedded within the operational, analytical and i…
S102
The Commonwealth AI Consortium will gather in New York to develop the AI action plan — The Commonwealth Artificial Intelligence Consortium (CAIC)members will meet during the UN General Assembly in New York t…
S103
Artificial intelligence (AI) – UN Security Council — Additionally, the development of AI systems should involve collaboration with local communities to better understand cul…
S104
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S105
Introduction — | Term | EU definition …
S106
AI safeguards prove hard to define — Policymakers seeking to regulate AI face an uphill battle as the science evolves faster than safeguards can be devised.E…
S107
Comprehensive Report: Cyber Fraud and Human Trafficking – A Global Crisis Requiring Multilateral Response — Speed of response and enforcement capabilities The Minister emphasizes that governments must act together due to the tr…
S108
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S109
Meta joins the tech giants’ race for AGI — Meta, the parent company of Facebook, has entered the race for Artificial General Intelligence (AGI).Meta CEO Mark Zucke…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Stuart Russell
1 argument119 words per minute250 words125 seconds
Argument 1
AI safety requires worldwide coordination because harms cross borders (Stuart Russell)
EXPLANATION
Russell emphasizes that AI‑related harms such as psychological damage or loss of human control are not confined to any single country, making international coordination essential to prevent or mitigate these risks.
EVIDENCE
He stated that the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders, and we must coordinate to make sure that they don’t happen or they don’t originate anywhere [44-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Policymaker’s Guide stresses that AI-related harms cross borders and calls for global coordination to prevent them [S1].
MAJOR DISCUSSION POINT
Need for global coordination on AI safety
AGREED WITH
Nicolas Miailhe, Mathias Cormann, Eileen Donahoe
N
Nicolas Miailhe
2 arguments149 words per minute812 words325 seconds
Argument 1
AI Safety Connect convenes regular global summits and UN sessions to shape a unified safety agenda (Nicolas Miailhe)
EXPLANATION
Miailhe describes AI Safety Connect’s model of convening at AI summits worldwide and at the UN General Assembly, with a six‑month cadence, to accelerate safety discussions and build capacity.
EVIDENCE
He explained that they convene at each AI summit, started in Paris, then India, next Switzerland, also at the UN General Assembly, and hold global convenings every six months to speed up safety discussions [11-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The guide notes that AI Safety Connect convenes at each AI summit and holds semi-annual global convenings to accelerate safety discussions [S1].
MAJOR DISCUSSION POINT
Regular global convenings for AI safety
Argument 2
Capacity‑building and trust‑building exercises are vital for preparing stakeholders (Nicolas Miailhe)
EXPLANATION
Miailhe notes that beyond public events, AI Safety Connect conducts behind‑closed‑door capacity‑building and trust‑building activities to ready stakeholders for AI safety challenges.
EVIDENCE
He mentioned that they also do capacity building and trust building exercises at times behind closed doors during the intensive week in New Delhi [15-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building and trust-building are highlighted as essential for stakeholder readiness in the capacity-building initiatives report [S28].
MAJOR DISCUSSION POINT
Importance of capacity and trust building
AGREED WITH
Josephine Teo, Sangbu Kim, Mathias Cormann
M
Mathias Cormann
5 arguments145 words per minute864 words356 seconds
Argument 1
Building consensus through inclusive, evidence‑based processes is key to effective governance (Mathias Cormann)
EXPLANATION
Cormann argues that trust is earned by including all relevant actors—governments, industry, civil society, and technical experts—and grounding decisions in objective evidence.
EVIDENCE
He said trust is built through inclusion and on the basis of objective evidence, and that bringing together all relevant actors is what we need to do [77-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Policymaker’s Guide echoes this, stating that trust is built through inclusion of all relevant actors and reliance on objective evidence [S1].
MAJOR DISCUSSION POINT
Inclusive, evidence‑based consensus building
AGREED WITH
Nicolas Miailhe, Gobind Singh Deo
Argument 2
Trust is built through inclusion of governments, industry, civil society, and technical experts (Mathias Cormann)
EXPLANATION
He reiterates that a shared interest in trustworthy systems requires the participation of diverse stakeholders, each bringing distinct perspectives and imperatives.
EVIDENCE
He highlighted that bringing together governments, companies, civil society, and technical experts is essential for building trust and ensuring systems are trustworthy [77-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusion of governments, industry, civil society and technical experts as a trust-building mechanism is affirmed in the guide’s discussion of inclusive, evidence-based governance [S1].
MAJOR DISCUSSION POINT
Stakeholder inclusion for trust
AGREED WITH
Nicolas Miailhe, Gobind Singh Deo
Argument 3
Coordinated transparency and incident reporting are critical; an international incident response centre should be pursued (Mathias Cormann)
EXPLANATION
Cormann identifies coordinated transparency and incident reporting as the most critical frontier‑AI safety infrastructure, proposing a global incident response centre to share failure data without penalising reporters.
EVIDENCE
He described coordinated transparency and incident reporting as the most critical piece, referenced the Hiroshima Code of Conduct reporting framework, noted 25 organizations have submitted reports, and outlined the GPI Common Framework for Incident Reporting that could evolve into an international incident response centre [91-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The OECD common framework for incident reporting and the guide’s emphasis on coordinated transparency support the need for a global incident response centre [S5][S1].
MAJOR DISCUSSION POINT
Need for global incident reporting infrastructure
AGREED WITH
Eileen Donahoe
DISAGREED WITH
Gobind Singh Deo
Argument 4
Open‑source safety tools and metrics are needed to make trustworthy AI practical (Mathias Cormann)
EXPLANATION
He points out that the OECD has launched an open call for open‑source safety and evaluation tools, which will be catalogued to help implement trustworthy AI in practice.
EVIDENCE
He noted that the OECD recently launched an open call for open source safety and evaluation tools hosted in the OECD.ai catalog of tools and metrics to make trustworthy AI easier to implement [98-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source safety and evaluation tools are identified as crucial for practical trustworthy AI in the open-source tools report [S33].
MAJOR DISCUSSION POINT
Open‑source tools for practical AI safety
AGREED WITH
Nicolas Miailhe, Josephine Teo, Sangbu Kim
Argument 5
Periodic pauses for testing, auditing, and monitoring are necessary to maintain public trust (Mathias Cormann)
EXPLANATION
Cormann suggests that occasional slow‑downs—pausing to test, monitor, audit, and share information—are essential to build confidence that AI systems respect fundamental rights and earn public trust.
EVIDENCE
He said that occasionally we should pause, test, monitor, audit, share information, and invest in building confidence that these systems can work as intended and respect fundamental rights [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Maintaining public trust through testing, auditing and monitoring is discussed in the public-trust governance commentary [S35].
MAJOR DISCUSSION POINT
Pausing for testing to sustain trust
AGREED WITH
Jann Tallinn
E
Eileen Donahoe
2 arguments122 words per minute1101 words539 seconds
Argument 1
Current governance is fragmented; policymakers must close gaps and create binding incentives (Eileen Donahoe)
EXPLANATION
Donahoe describes a governance landscape that is unharmonised, fragmented across jurisdictions, and lacking binding incentives for developers and investors, which hampers effective AI risk management.
EVIDENCE
She explained that the technology is advancing rapidly with minimal guardrails, while risk-management processes are ill-adapted, fragmented across jurisdictions, or insufficiently binding, resulting in an unharmonized governance landscape that fails to shape incentives [56-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fragmented AI governance and the need for harmonisation and binding incentives are highlighted in the multistakeholder coordination analysis [S38][S39].
MAJOR DISCUSSION POINT
Fragmented AI governance needs binding incentives
AGREED WITH
Mathias Cormann
Argument 2
Middle powers can leverage pooled resources and normative influence to steer AI safety (Eileen Donahoe)
EXPLANATION
Donahoe argues that middle powers, through pooled resources, market leverage, normative influence, and regulatory innovation, can shape global AI practices and safety outcomes more effectively than previously thought.
EVIDENCE
She stated that middle powers can, through pooled resources, market leverage, normative influence, and regulatory innovation, shape the direction of global AI practices and safeties, and that leading from the middle may be a more powerful approach [62-64].
MAJOR DISCUSSION POINT
Role of middle powers in AI safety
AGREED WITH
Gobind Singh Deo, Josephine Teo
G
Gobind Singh Deo
3 arguments174 words per minute535 words183 seconds
Argument 1
ASEAN AI Safety Network exemplifies regional coordination to align standards (Gobind Singh Deo)
EXPLANATION
Gobind highlights the ASEAN AI Safety Network as a regional mechanism that places AI at the centre of ASEAN’s agenda, aligning standards and fostering cooperation among member states.
EVIDENCE
He noted that under Malaysia’s leadership, ASEAN placed AI at the centre of its agenda by establishing the ASEAN AI Safety Network, representing a model of regional coordination [152-155].
MAJOR DISCUSSION POINT
Regional coordination via ASEAN AI Safety Network
AGREED WITH
Eileen Donahoe, Josephine Teo
Argument 2
Malaysia’s dual‑track national plan and regional network offers a model for other middle powers (Gobind Singh Deo)
EXPLANATION
Gobind describes Malaysia’s approach of simultaneously building national AI capacity (AI National Action Plan, AI Governance Bill) while leading regional coordination through the ASEAN AI Safety Network, offering a replicable model.
EVIDENCE
He explained that Malaysia, as ASEAN chair, placed AI at the centre of the agenda, is finalising its AI National Action Plan, and expects an AI Governance Bill in 2026, illustrating a dual-track approach of national capacity building and regional coordination [152-156].
MAJOR DISCUSSION POINT
Dual‑track national and regional AI strategy
Argument 3
Enforcement agencies and institutional capacity are essential for implementing standards across ASEAN (Gobind Singh Deo)
EXPLANATION
Gobind stresses that without dedicated agencies to enforce standards, regulations, and legislation, AI governance will remain merely paper‑based and ineffective across ASEAN.
EVIDENCE
He argued that standards, regulation, and legislation require an agency capable of enforcement; otherwise they remain strong on paper but lack impact, and called for building mechanisms across ASEAN that strengthen institutional capacity [162-166].
MAJOR DISCUSSION POINT
Need for enforcement institutions in ASEAN
AGREED WITH
Mathias Cormann, Nicolas Miailhe
DISAGREED WITH
Mathias Cormann
S
Sangbu Kim
3 arguments112 words per minute525 words280 seconds
Argument 1
The World Bank can help Global South nations design safety‑by‑design AI systems (Sangbu Kim)
EXPLANATION
Kim suggests that the World Bank should assist developing countries by ensuring AI systems are designed with safety architecture from the outset, leveraging partnerships with advanced economies and tech firms.
EVIDENCE
He said the World Bank can help clients be well prepared from the scratch, design safety architecture within AI systems, and work closely with advanced economies, companies, and high-end examples to transfer good practices to the developing world [176-179].
MAJOR DISCUSSION POINT
World Bank support for safety‑by‑design AI
Argument 2
Safety architecture must be embedded from the design stage, with dedicated investment in protection mechanisms (Sangbu Kim)
EXPLANATION
Kim emphasizes that safety must be built into AI at the design phase, requiring investment and collaboration with high‑end partners to develop red‑team practices and protective measures.
EVIDENCE
He noted the need to design safety architecture from the start, invest in protection, and collaborate with advanced economies and companies running red-team exercises to learn how to prevent AI attacks [178-182].
MAJOR DISCUSSION POINT
Embedding safety architecture early
AGREED WITH
Nicolas Miailhe, Josephine Teo, Mathias Cormann
Argument 3
The World Bank can partner with advanced economies to transfer safety best practices to developing countries (Sangbu Kim)
EXPLANATION
Kim reiterates that close collaboration with advanced economies and tech firms is essential for the World Bank to convey best‑practice safety solutions to low‑capacity nations.
EVIDENCE
He described the necessity of working closely with advanced economies, companies, and high-end examples to connect good practices to the developing world, highlighting partnership as the only way forward [180-184].
MAJOR DISCUSSION POINT
Partnerships for safety knowledge transfer
J
Josephine Teo
3 arguments143 words per minute889 words371 seconds
Argument 1
Singapore can bridge coordination gaps despite limited jurisdiction by translating science into effective policy (Josephine Teo)
EXPLANATION
Teo explains that although AI systems originate abroad, Singapore can influence safety by converting scientific insights into actionable policies, emphasizing research, testing, and standards.
EVIDENCE
She noted that smaller states rely on external technology, but Singapore can translate science into policy, citing the need to understand what works, trade-offs, and the importance of research, testing, simulations, and interoperable standards, illustrated with an aviation safety example [104-112][119-136].
MAJOR DISCUSSION POINT
Science‑to‑policy translation in Singapore
AGREED WITH
Eileen Donahoe, Gobind Singh Deo
Argument 2
Robust research, testing, and interoperable standards are required to turn scientific insights into policy (Josephine Teo)
EXPLANATION
Teo stresses that effective AI policy demands extensive research, rigorous testing, simulations across conditions, and the development of interoperable standards to ensure safety across jurisdictions.
EVIDENCE
She described the need for investment in research, testing, simulations (e.g., aviation runway distances under different weather), and the creation of interoperable standards, noting that without such work, safety cannot be assured [110-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of research, rigorous testing and interoperable standards for policy effectiveness is emphasized in the data-interoperability and evidence-based decision-making reports [S34][S30].
MAJOR DISCUSSION POINT
Research, testing, and standards for policy
AGREED WITH
Nicolas Miailhe, Sangbu Kim, Mathias Cormann
Argument 3
Singapore emphasizes policy effectiveness, trade‑off analysis, and international collaboration to protect its citizens (Josephine Teo)
EXPLANATION
Teo highlights that Singapore focuses on evaluating policy effectiveness, understanding trade‑offs, and collaborating internationally through bodies like the OECD and AI Safety Connect to safeguard its population.
EVIDENCE
She explained that policymakers must assess whether policies are effective and understand trade-offs, citing the need for research, testing, and international collaboration via the OECD, AI Safety Connect, and ICI as essential for building trust and safety [110-119][140-144].
MAJOR DISCUSSION POINT
Policy effectiveness and international cooperation
J
Jann Tallinn
4 arguments143 words per minute517 words216 seconds
Argument 1
Effective prohibition of superintelligence hinges on transparent disclosure of lab capabilities (Jann Tallinn)
EXPLANATION
Tallinn argues that any prohibition on superintelligent AI must be based on clear, public disclosure of what labs can achieve, ensuring scientific consensus and public buy‑in.
EVIDENCE
He referenced the Future of Life Institute statement calling for a prohibition until there is broad scientific consensus and strong public buy-in, emphasizing the need for transparency [203-204].
MAJOR DISCUSSION POINT
Transparency as basis for prohibition
Argument 2
Private investors now have limited sway over leading AI firms; market forces dominate (Jann Tallinn)
EXPLANATION
Tallinn observes that leading AI companies have grown beyond the influence of private investors, especially as they approach IPOs, reducing investors’ ability to affect safety decisions.
EVIDENCE
He stated that investors don’t play much of a role anymore because leading AI companies are above the level where private investors can influence them, and they will soon IPO [232-233].
MAJOR DISCUSSION POINT
Diminished investor influence
Argument 3
Massive funding streams can be harnessed to pressure companies toward safety if public demand is strong (Jann Tallinn)
EXPLANATION
Tallinn points out that large financial flows into AI can be leveraged to enforce safety, provided there is sufficient public pressure and signatures supporting regulation.
EVIDENCE
He noted that the Future of Life Institute statement has gathered over 130,000 signatures, indicating public pressure, and that the trillions flowing into AI actually make it easier to govern if there is enough demand [224-227].
MAJOR DISCUSSION POINT
Using funding pressure for safety
DISAGREED WITH
Mathias Cormann
Argument 4
Development of superintelligent AI should be halted until broad scientific consensus and strong public buy‑in are achieved (Jann Tallinn)
EXPLANATION
Tallinn reiterates the call for a moratorium on superintelligence development until the scientific community reaches consensus on safety and the public demonstrates strong support.
EVIDENCE
He restated the Future of Life Institute’s call for prohibition until there is broad scientific consensus and strong public buy-in, emphasizing the need for such conditions before proceeding [203-206].
MAJOR DISCUSSION POINT
Moratorium until consensus and buy‑in
DISAGREED WITH
Mathias Cormann
O
Osama Manzar
1 argument72 words per minute193 words159 seconds
Argument 1
AI safety must focus first on protecting people and preserving human intelligence before expanding AI capabilities (Osama Manzar)
EXPLANATION
Manzar stresses that the primary goal of AI safety is to safeguard human beings and human intelligence, likening it to protecting passengers before teaching them how to think, and calls for strong safety guards and ethical policies.
EVIDENCE
He argued that the entire safety aspect of AI should be about saving people before teaching them how to think, emphasizing the need to save human intelligence from artificial intelligence and embed safety guards, ethics, and policy playbooks [272-276].
MAJOR DISCUSSION POINT
Prioritizing human protection over AI advancement
Agreements
Agreement Points
AI safety requires worldwide coordination because harms cross borders
Speakers: Stuart Russell, Nicolas Miailhe, Mathias Cormann, Eileen Donahoe
AI safety requires worldwide coordination because harms cross borders (Stuart Russell)
All speakers stress that AI-related risks such as psychological damage or loss of human control are not confined to any single country and therefore demand global coordination. Russell explicitly notes the cross-border nature of harms and the need to coordinate [44-46]; Miailhe describes semi-annual global convenings at AI summits and the UN to accelerate safety discussions [11-15]; Cormann frames the governance challenge as requiring global coordination to ensure only safe systems are built [42-44]; Donahoe calls for deeper international diplomacy on extreme risks and highlights the role of middle powers in bridging gaps [61-64].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes the Policymaker’s Guide to International AI Safety Coordination, which stresses that AI harms cross borders and require global coordination [S62], and aligns with IGF discussions on the global nature of cybersecurity threats [S64] and the need for worldwide readiness [S60].
Inclusive, evidence‑based consensus building and stakeholder inclusion are essential for trustworthy AI
Speakers: Mathias Cormann, Nicolas Miailhe, Gobind Singh Deo
Building consensus through inclusive, evidence‑based processes is key to effective governance (Mathias Cormann) Trust is built through inclusion of governments, industry, civil society, and technical experts (Mathias Cormann) Capacity‑building and trust‑building exercises are vital for preparing stakeholders (Nicolas Miailhe) Enforcement agencies and institutional capacity are essential for implementing standards across ASEAN (Gobind Singh Deo)
Cormann argues that trust and effective governance arise from bringing together all relevant actors and grounding decisions in objective evidence. Miailhe adds that AI Safety Connect conducts capacity-building and trust-building activities behind closed doors. Gobind stresses that without agencies to enforce standards, consensus remains ineffective. Together they underline inclusion, evidence, and institutional capacity as pillars of trustworthy AI [77-80][84-86][15-16][162-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder, evidence-based consensus building is highlighted in the IGF 2023 report on evolving AI governance [S55] and reinforced by the high-level consensus on AI governance principles [S65]; standards bodies also stress inclusive processes [S56], and recent analyses note collaborative approaches as essential for trustworthy AI [S76].
Coordinated transparency and incident reporting, potentially via an international incident response centre, are critical infrastructure for frontier AI safety
Speakers: Mathias Cormann, Eileen Donahoe
Coordinated transparency and incident reporting are critical; an international incident response centre should be pursued (Mathias Cormann) Current governance is fragmented; policymakers must close gaps and create binding incentives (Eileen Donahoe)
Cormann identifies coordinated transparency and incident reporting as the most critical piece of frontier-AI safety infrastructure and proposes an international incident response centre to share failure data without penalising reporters. Donahoe highlights the fragmented, un-harmonised governance landscape and asks whether an incident response centre should be a priority, indicating shared concern for a coordinated reporting mechanism [91-96][73-76].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s AI incident reporting guidelines propose a centralized database for critical-infrastructure incidents, exemplifying coordinated transparency mechanisms [S71]; similar calls for transparency of frontier models and investment in trust-and-safety institutions appear in recent policy briefs [S61]; the AI Standards Hub advocates an international incident response centre as core infrastructure [S56][S57].
Middle powers and regional bodies can lead AI safety by pooling resources, normative influence and regional coordination
Speakers: Eileen Donahoe, Gobind Singh Deo, Josephine Teo
Middle powers can leverage pooled resources and normative influence to steer AI safety (Eileen Donahoe) ASEAN AI Safety Network exemplifies regional coordination to align standards (Gobind Singh Deo) Singapore can bridge coordination gaps despite limited jurisdiction by translating science into effective policy (Josephine Teo)
Donahoe argues that middle powers, through pooled resources and normative influence, can shape global AI practices. Gobind points to the ASEAN AI Safety Network as a concrete regional coordination model. Teo explains how Singapore, though a smaller state, can translate scientific insights into policy and work through international bodies to protect its citizens. All three emphasize the strategic role of non-superpower states in global AI governance [62-64][101-102][152-155][156-157][104-108][110-112].
POLICY CONTEXT (KNOWLEDGE BASE)
Regional coordination is advocated by the Global South AI Safety Research Network, which urges middle powers to pool resources and normative influence [S63]; the AI and International Peace and Security report highlights regional cooperation mechanisms for AI governance [S75]; discussions on assurance gaps stress the role of regional bodies in the Global South [S58]; IGF panels also note regional capacity building as a pathway for leadership [S55].
Capacity building, trust building, and investment in safety tools are necessary to prepare stakeholders for frontier AI
Speakers: Nicolas Miailhe, Josephine Teo, Sangbu Kim, Mathias Cormann
Capacity‑building and trust‑building exercises are vital for preparing stakeholders (Nicolas Miailhe) Robust research, testing, and interoperable standards are required to turn scientific insights into policy (Josephine Teo) Safety architecture must be embedded from the design stage, with dedicated investment in protection mechanisms (Sangbu Kim) Open‑source safety tools and metrics are needed to make trustworthy AI practical (Mathias Cormann)
Miailhe highlights ongoing capacity-building and trust-building activities. Teo stresses the need for extensive research, testing, simulations and interoperable standards to translate science into policy. Kim calls for safety-by-design and investment in protective mechanisms, while Cormann promotes open-source safety tools to operationalise trustworthy AI. Together they underscore a multi-layered approach of capacity development, investment and tooling [15-16][110-136][178-182][254-255][98-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity and trust building are repeatedly called for in IGF multi-stakeholder governance recommendations [S55]; the assurance-gap discussion emphasizes investment in safety tools for developing regions [S58]; policy briefs call for investment in trust-and-safety infrastructure for frontier AI [S61]; and broader analyses underline the need for capacity building to prepare stakeholders [S76].
Periodic pauses, testing and a slowdown of AI development are needed to ensure safety and public trust
Speakers: Mathias Cormann, Jann Tallinn
Periodic pauses for testing, auditing, and monitoring are necessary to maintain public trust (Mathias Cormann) slow down we really need to slow down (Jann Tallinn)
Cormann recommends occasional slow-downs to test, monitor, audit and share information, building confidence that systems respect fundamental rights. Tallinn echoes this by explicitly calling for a slowdown of AI development, especially superintelligence efforts. Both converge on the need to temper speed with safety checks [84-86][256-257].
Similar Viewpoints
Both emphasize that AI safety challenges are global and demand coordinated governance mechanisms, whether through broad coordination or specific incident‑reporting infrastructure [44-46][42-44][91-96].
Speakers: Stuart Russell, Mathias Cormann
AI safety requires worldwide coordination because harms cross borders (Stuart Russell) Coordinated transparency and incident reporting are critical; an international incident response centre should be pursued (Mathias Cormann)
Both see regional or middle‑power initiatives as essential pathways to achieve coordinated AI governance and to operationalise standards across jurisdictions [62-64][152-155][156-157].
Speakers: Eileen Donahoe, Gobind Singh Deo
Middle powers can leverage pooled resources and normative influence to steer AI safety (Eileen Donahoe) ASEAN AI Safety Network exemplifies regional coordination to align standards (Gobind Singh Deo)
Both stress that safety must be built into AI from the outset through rigorous research, testing and investment, and that policy must translate scientific evidence into actionable safeguards [104-108][110-112][178-182].
Speakers: Josephine Teo, Sangbu Kim
Singapore can bridge coordination gaps despite limited jurisdiction by translating science into effective policy (Josephine Teo) Safety architecture must be embedded from the design stage, with dedicated investment in protection mechanisms (Sangbu Kim)
Both agree that a deliberate slowdown of AI development, accompanied by testing and monitoring, is essential to safeguard public trust and prevent unsafe outcomes [84-86][256-257].
Speakers: Mathias Cormann, Jann Tallinn
Periodic pauses for testing, auditing, and monitoring are necessary to maintain public trust (Mathias Cormann) slow down we really need to slow down (Jann Tallinn)
Unexpected Consensus
Massive funding streams can be leveraged as a lever for AI safety
Speakers: Jann Tallinn, Sangbu Kim
Massive funding streams can be harnessed to pressure companies toward safety if public demand is strong (Jann Tallinn) Safety architecture must be embedded from the design stage, with dedicated investment in protection mechanisms (Sangbu Kim)
While Tallinn focuses on using the trillions flowing into AI as a pressure point for safety, Kim emphasizes the need for upfront investment in safety-by-design. Both converge on the insight that financial resources, whether through public pressure or direct investment, are pivotal levers for achieving AI safety-a linkage not explicitly drawn elsewhere in the discussion [224-227][254-255].
POLICY CONTEXT (KNOWLEDGE BASE)
Large-scale funding for AI safety is highlighted in reports on AI safety institutes leveraging substantial research investments [S54] and in statements by AI leaders such as Yoshua Bengio urging massive safety research funding [S74]; investor-focused analyses stress the importance of consistent, predictable policy environments for channeling finance [S68] and note challenges in blended finance for AI safety projects [S69].
Overall Assessment

There is strong consensus that AI safety is a global challenge requiring coordinated governance, inclusive evidence‑based consensus building, and robust capacity‑building. Middle powers and regional bodies are seen as pivotal actors, and concrete infrastructure such as incident‑reporting mechanisms and open‑source safety tools are widely endorsed. Participants also agree on the need for periodic slow‑downs, testing and investment in safety‑by‑design.

High consensus on the need for global coordination, inclusive governance, capacity building and investment; moderate consensus on specific mechanisms (incident response centre) and on the role of funding as a lever. This broad agreement provides a solid foundation for advancing coordinated policy initiatives and allocating resources toward practical safety tools and regional cooperation.

Differences
Different Viewpoints
Role of private investors in AI safety governance
Speakers: Eileen Donahoe, Jann Tallinn
What would it take to bring investors meaningfully into the safety conversation? (Eileen Donahoe) Investors don’t play much of a role anymore because the leading AI companies are above the level where private investors can influence them (Jann Tallinn)
Eileen asks how investors can be engaged to shape safety incentives, implying they could have a meaningful role [228-230]. Jann counters that investors now have little influence over leading AI firms, especially as they approach IPOs, suggesting they cannot be a lever for safety [232-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent commentary notes that companies often limit engagement beyond policy references, shifting responsibility to private investors and raising questions about their governance role [S67]; investor-focused literature stresses the need for consistent regulatory frameworks to enable effective investor participation [S68]; blended-finance discussions also highlight investor influence on AI safety initiatives [S69].
Preferred mechanism to slow or halt risky AI development
Speakers: Mathias Cormann, Jann Tallinn
Occasionally we should pause, test, monitor, audit, share information and invest in building confidence (Mathias Cormann) Development of superintelligent AI should be halted until broad scientific consensus and strong public buy‑in are achieved (Jann Tallinn) Massive funding streams can be harnessed to pressure companies toward safety if public demand is strong (Jann Tallinn)
Cormann advocates periodic pauses for testing and auditing as a pragmatic way to maintain trust [84-86]. Tallinn calls for a more decisive prohibition on superintelligence until consensus and public buy-in are reached, and argues that large funding can be used as leverage if there is sufficient public pressure [203-206][224-227]. The two propose different primary levers-operational pauses versus a moratorium tied to consensus.
How to ensure compliance with AI safety standards: voluntary reporting vs enforced institutions
Speakers: Mathias Cormann, Gobind Singh Deo
Coordinated transparency and incident reporting are critical; an international incident response centre should be pursued (Mathias Cormann) Enforcement agencies and institutional capacity are essential for implementing standards across ASEAN (Gobind Singh Deo)
Cormann emphasizes building a voluntary, transparent incident reporting framework and a future international response centre to share failures without penalising reporters [91-96]. Gobind stresses that without dedicated agencies to enforce standards, regulations remain paper-based and ineffective, calling for institutional mechanisms to ensure compliance [162-166]. The disagreement lies in reliance on voluntary transparency versus mandatory enforcement structures.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s mandatory AI incident reporting guidelines illustrate a move toward enforced compliance mechanisms [S71]; policy analyses advocate investment in robust institutions to oversee safety standards [S61]; and standards bodies discuss the balance between voluntary reporting and formal enforcement in international frameworks [S56].
Unexpected Differences
Investor influence versus irrelevance
Speakers: Eileen Donahoe, Jann Tallinn
What would it take to bring investors meaningfully into the safety conversation? (Eileen Donahoe) Investors don’t play much of a role anymore because the leading AI companies are above the level where private investors can influence them (Jann Tallinn)
Eileen treats investors as a potentially powerful lever for safety governance, a view not commonly emphasized in high-level AI policy discussions. Tallinn’s dismissal of investor influence was unexpected, revealing a stark contrast in perceived stakeholder relevance [228-230][232-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on investor relevance note that while investors can leverage funding, inconsistent policy environments may render their influence marginal, as highlighted in analyses of corporate-government dynamics [S67] and investor consistency requirements [S68].
Philosophical framing of AI safety as protecting human intelligence
Speakers: Osama Manzar, Other panelists
The entire safety aspect of AI should be about saving people before you teach them how to think; we must save human intelligence from artificial intelligence (Osama Manzar) Other speakers focus on technical, regulatory, and coordination measures without invoking a fundamental protection of human intelligence
Manzar’s framing of AI safety as a moral imperative to protect human intelligence is a broader, more existential stance than the predominantly technical and policy-oriented perspectives of the other speakers, representing an unexpected divergence in the conceptualization of AI safety [272-276].
Overall Assessment

The panel largely concurs on the necessity of coordinated AI governance, but diverges on the mechanisms to achieve safety—ranging from voluntary transparency and incident reporting, to enforced institutional compliance, to periodic pauses, to outright prohibitions. A notable unexpected split concerns the perceived role of private investors, with one speaker viewing them as a potential lever and another dismissing their influence. These disagreements highlight the challenge of aligning diverse stakeholder perspectives into a coherent global safety strategy.

Moderate to high. While there is broad consensus on the goal of AI safety, the lack of agreement on concrete levers—investor engagement, enforcement versus voluntary reporting, and the preferred slowdown mechanism—suggests that achieving unified policy action will require substantial negotiation and compromise.

Partial Agreements
All speakers agree that coordinated governance—whether global, regional, or national—is essential to manage AI risks. However, they differ on the scale and mechanism: Russell calls for worldwide coordination; Gobind focuses on ASEAN regional mechanisms; Cormann stresses inclusive consensus building; Eileen highlights the need for binding incentives; Teo emphasizes science‑to‑policy translation within limited jurisdiction [44-46][77-84][56-60][152-155][104-112].
Speakers: Stuart Russell, Mathias Cormann, Eileen Donahoe, Gobind Singh Deo, Josephine Teo
AI safety requires worldwide coordination because harms cross borders (Stuart Russell) Building consensus through inclusive, evidence‑based processes is key (Mathias Cormann) Current governance is fragmented; policymakers must close gaps and create binding incentives (Eileen Donahoe) ASEAN AI Safety Network exemplifies regional coordination to align standards (Gobind Singh Deo) Singapore can bridge coordination gaps by translating science into effective policy (Josephine Teo)
Both agree that practical tools and design practices are needed to embed safety, but Cormann focuses on open‑source tool catalogs, while Kim emphasizes financial and partnership support to embed safety architecture from the design stage [98-99][178-182].
Speakers: Mathias Cormann, Sangbu Kim
Open‑source safety tools and metrics are needed to make trustworthy AI practical (Mathias Cormann) The World Bank can help Global South nations design safety‑by‑design AI systems (Sangbu Kim)
Takeaways
Key takeaways
AI safety risks are global and require coordinated international governance. Current AI governance is fragmented; inclusive, evidence‑based consensus building is essential. Middle powers and global‑majority states can leverage pooled resources and normative influence to shape safety standards. Transparency, incident reporting, and a potential international incident‑response centre are critical infrastructure for frontier AI safety. Open‑source safety tools, interoperable standards, and rigorous testing/simulation are needed to translate scientific insights into enforceable policy. Institutional capacity and enforcement agencies are necessary for implementing standards, especially in regional bodies like ASEAN. The World Bank can help Global South countries adopt safety‑by‑design practices through partnerships with advanced economies. There is a call for periodic pauses or slow‑downs in AI development to allow testing, auditing, and public trust building. Investor influence on leading AI firms is diminishing; public pressure and massive funding streams may be used to enforce safety commitments. Protecting human beings and preserving human intelligence must be prioritized over rapid AI advancement.
Resolutions and action items
AI Safety Connect will continue its semi‑annual global convenings and publish the results of the closed‑door scientific dialogue. The OECD will expand coordinated transparency and incident‑reporting mechanisms, building toward an international AI incident‑response centre. The OECD AI Policy Observatory will continue to collect and share data on AI governance practices worldwide. An open call for open‑source safety and evaluation tools will be maintained, with tools catalogued on the OECD.ai platform. Singapore will refresh the AI safety research priorities (second edition of the Singapore Consensus) and advance practical testing tools within the next 12 months. Malaysia will operationalise the ASEAN AI Safety Network, strengthen enforcement agencies, and finalize its AI National Action Plan and AI Governance Bill by 2026. The World Bank will facilitate safety‑by‑design collaborations between developing‑country clients and advanced‑economy partners, including red‑team exercises. Panelists agreed to prioritize institutionalising AI safety governance structures at national and regional levels within the next year. A call was made to the United Nations General Assembly to host the fourth edition of AI Safety Connect in New York.
Unresolved issues
Specific design, funding, and legal framework for an international AI incident‑response centre remain undefined. How to create binding incentives for AI developers and deployers across jurisdictions without stifling innovation. Mechanisms for effectively bringing private investors into the safety governance conversation were not agreed upon. Details of how a prohibition on superintelligent AI development could be enforced in practice were not resolved. The exact process for harmonising ASEAN enforcement agencies and ensuring consistent implementation of standards across member states remains open. Methods for measuring policy effectiveness and trade‑offs, especially in rapidly evolving AI contexts, were discussed but not concretely specified.
Suggested compromises
Adopt periodic, limited pauses in AI development to allow for testing, auditing, and public transparency before proceeding. Use coordinated incident‑reporting as a voluntary but widely adopted step, building trust while avoiding punitive legal exposure for reporters. Middle powers lead on normative frameworks and resource pooling, allowing larger AI‑producing nations to adopt these standards gradually. Encourage open‑source safety tools and shared metrics as a common baseline, reducing duplication and fostering collaborative improvement.
Thought Provoking Comments
Middle powers and global majority states can’t be seen as peripheral actors; leading from the middle may turn out to be a more powerful approach than previously anticipated.
She reframes the AI governance narrative away from a binary superpower vs. rest dynamic, highlighting the strategic agency of middle‑income countries and suggesting a new diplomatic lever for safety coordination.
This comment shifted the discussion toward the role of non‑superpower nations, prompting panelists from Singapore, Malaysia and the World Bank to discuss concrete ways their regions can influence standards, and set the stage for the later focus on regional cooperation (e.g., ASEAN AI Safety Network).
Speaker: Eileen Donahoe
Trust is built through inclusion and objective evidence; occasionally we should pause, test, monitor, audit, share information, and invest in building confidence that systems respect fundamental rights.
He links the abstract notion of ‘trust’ to concrete procedural steps (pausing, transparency, incident reporting) and frames these as prerequisites for public acceptance, moving the conversation from high‑level principles to actionable governance mechanisms.
His call for pauses and incident‑reporting infrastructure sparked subsequent remarks about coordinated transparency (e.g., OECD’s incident reporting framework) and reinforced the panel’s focus on building practical safety tools, influencing the later emphasis on an international incident response centre.
Speaker: Mathias Cormann
Translating scientific knowledge into policy requires rigorous testing, simulations, and interoperable standards—just as aviation safety demands evidence‑based distance rules for aircraft take‑offs and landings.
She uses a concrete aviation analogy to illustrate the gap between scientific understanding and policy implementation, emphasizing the need for evidence‑based standards and cross‑jurisdictional interoperability.
The analogy deepened the discussion on how technical research can be operationalised, leading other speakers (e.g., Gobind Singh Deo) to stress the necessity of enforcement agencies and standardized testing regimes.
Speaker: Josephine Teo
Standards, regulations, and legislation are ineffective without an agency that can enforce them; otherwise they remain strong on paper but have no real impact.
He highlights a critical missing piece in AI governance—implementation capacity—shifting the focus from rule‑making to institutional capability and sustainability.
This point redirected the conversation toward building enforcement bodies within ASEAN and other regional frameworks, reinforcing the earlier call for institutionalisation and influencing the panel’s concluding recommendations about sustainable structures.
Speaker: Gobind Singh Deo
The biggest risk lies in the labs of top AI companies; a prohibition on superintelligence development should only happen after broad scientific consensus and strong public buy‑in, and political pressure can make such a prohibition feasible.
He brings a stark, lab‑centric perspective that contrasts with the policy‑focused remarks of others, introducing the idea of an outright prohibition and linking it to public mobilisation and political leverage.
His emphasis on a prohibition and the limited role of investors prompted a brief exchange on investor influence, and reinforced the urgency expressed by other speakers about slowing down development and increasing transparency.
Speaker: Jann Tallinn
Ensuring AI systems operate safely and ethically is partly a technical challenge and partly a governance challenge; global coordination is essential because harms cross borders.
He succinctly frames the dual nature of the problem and underscores the necessity of international coordination, setting a conceptual foundation for the entire panel.
This framing guided the subsequent questions from Eileen Donahoe and anchored the panel’s focus on coordination mechanisms, influencing the direction of the discussion toward global governance structures.
Speaker: Stuart Russell
AI is like a sphere that can penetrate any shield, but we can also build stronger protective shields using AI itself; the solution lies in close collaboration between developing and advanced economies.
He uses a vivid metaphor to illustrate the paradox of AI as both threat and defence, emphasizing the need for collaborative learning and co‑development of safety tools across capacity levels.
The metaphor reinforced the theme of partnership between high‑ and low‑capacity countries, supporting earlier points about middle‑power agency and prompting the panel to consider concrete collaborative models for safety tool development.
Speaker: Sangbu Kim
Overall Assessment

The discussion was shaped by a handful of pivotal insights that moved it from a generic acknowledgment of AI risks to a nuanced exploration of governance levers. Stuart Russell’s framing of the dual technical‑governance challenge set the agenda, while Eileen Donahoe’s spotlight on middle‑power agency broadened the geopolitical lens. Mathias Cormann’s call for trust‑building pauses and incident reporting introduced concrete procedural tools, which were elaborated by Mathias Cormann and later reinforced by Gobind Singh Deo’s insistence on enforcement capacity. Josephine Teo’s aviation analogy and Sangbu Kim’s sphere‑shield metaphor grounded abstract concepts in real‑world analogies, prompting concrete discussions about standards, testing, and collaborative safety tool development. Jann Tallinn’s stark warning about lab‑level risks and the feasibility of a prohibition injected urgency and highlighted the limits of market‑based solutions, leading to a brief debate on investor influence. Collectively, these comments redirected the conversation toward actionable, inclusive, and internationally coordinated governance mechanisms, culminating in a consensus that the coordination gap is real but bridgeable through inclusive institutions, transparent reporting, and sustained political pressure.

Follow-up Questions
What are the key lessons learned from building consensus on AI safety frameworks and what is the most critical piece of coordinated frontier AI safety infrastructure to build now, such as an international incident response center?
Understanding past successes and pinpointing the most needed infrastructure will help shape effective global coordination and rapid response to AI incidents.
Speaker: Eileen Donahoe (to Mathias Cormann)
What role can Singapore and other middle powers play in bridging the coordination gap and keeping scientific and safety channels open, and what is the most important step they can take in the next 12 months to establish a shared minimum understanding of frontier safety?
Middle powers have unique diplomatic leverage; identifying concrete actions can enable them to steer global AI governance despite limited domestic jurisdiction over frontier AI.
Speaker: Eileen Donahoe (to Josephine Teo)
What lessons can other middle powers draw from Malaysia’s experience with the ASEAN AI Safety Network, and what concrete steps should ASEAN take in the next 12–18 months to move beyond aspirational goals?
Malaysia’s dual‑track approach offers a potential model; clarifying actionable steps will help the region operationalize AI safety coordination.
Speaker: Eileen Donahoe (to Gobind Singh Deo)
How can the World Bank help Global South countries transition from passive recipients of frontier AI to active shapers of safety and reliability requirements before large‑scale deployment?
The World Bank’s financing and technical assistance could be pivotal, but specific mechanisms for capacity‑building, standards adoption, and risk assessment need definition.
Speaker: Eileen Donahoe (to Sangbu Kim)
What would an effective prohibition on superintelligent AI development look like in practice, and how could it be enforced?
A clear, enforceable prohibition is a cornerstone of the Future of Life Institute’s stance; detailing its practical design is essential for policy implementation.
Speaker: Eileen Donahoe (to Jann Tallinn)
What would it take to bring investors meaningfully into the AI safety conversation?
Investors shape incentives for AI developers; identifying mechanisms (e.g., safety‑linked financing terms, disclosure requirements) could align capital flows with safety goals.
Speaker: Eileen Donahoe (follow‑up to Jann Tallinn)
What should be prioritized in the next 12–24 months to enhance AI safety and security globally?
A short‑term priority list will guide governments, industry, and multilateral bodies in allocating resources and legislative effort before capabilities outpace governance.
Speaker: Eileen Donahoe (to the panel)
How can coordinated transparency and incident reporting frameworks be standardized across jurisdictions to enable an international AI incident response center?
Standardized reporting is prerequisite for a global response hub; research is needed on data sharing protocols, legal protections, and interoperability.
Speaker: Mathias Cormann (implied)
What are the most effective methods for refreshing AI safety research priorities to keep pace with rapid technological advances?
The Singapore consensus quickly becomes outdated; a systematic, periodic review process is required to ensure research agendas remain relevant.
Speaker: Josephine Teo (implied)
What practical testing tools and evaluation metrics are needed to give developers assurance of safety before deployment?
Guidelines alone are insufficient; concrete, open‑source testing suites would enable developers to validate safety claims across diverse contexts.
Speaker: Josephine Teo (implied)
How can institutions be built or strengthened within ASEAN to enforce AI safety standards and sustain long‑term governance?
Enforcement agencies are essential for translating standards into impact; research should explore institutional design, funding, and cross‑border coordination.
Speaker: Gobind Singh Deo (implied)
What financing models can ensure adequate investment in AI safety measures, especially for low‑capacity countries?
Developing nations need dedicated funding streams for safety; exploring grants, blended finance, and risk‑sharing mechanisms is critical.
Speaker: Sangbu Kim (implied)
What mechanisms can increase transparency of AI companies’ internal knowledge to support global slowdown efforts?
Greater openness about development roadmaps and risk assessments could create political pressure for a slowdown; viable transparency frameworks must be studied.
Speaker: Jann Tallinn (implied)
How can open‑source safety and evaluation tools be curated, maintained, and adopted globally?
A centralized catalog (e.g., OECD.ai) is a start, but sustainable governance, community contributions, and integration into regulatory processes need further investigation.
Speaker: Mathias Cormann (implied)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.