Policymaker’s Guide to International AI Safety Coordination

20 Feb 2026 17:00h - 18:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The AI Safety Connect summit in New Delhi was convened to confront the accelerating race toward artificial intelligence and the widening gap between rapid technology deployment and adequate safety measures [1-5][6-9][11-15]. Organizers highlighted that AI Safety Connect brings together policymakers, industry, and academia through semi-annual global convenings at AI summits and the UN General Assembly to foster faster, inclusive safety discussions [11-22][24-30]. Stuart Russell of the International Association for Safe and Ethical AI emphasized that ensuring AI systems operate safely is both a technical and governance challenge that requires coordinated international action because harms cross borders [39-46], noting the upcoming second annual conference in Paris as an opportunity for stakeholders to collaborate on these governance issues [36-38].


Eileen Donahoe framed the policy discussion by warning that current AI governance is fragmented, with minimal guardrails and insufficiently binding risk-management processes, creating mixed signals for developers and investors [56-60]. She argued that middle-power and global-majority states can leverage pooled resources, market influence, and regulatory innovation to shape AI safety, and that their active participation is crucial to move from rhetoric to real-world impact [62-66]. Mathias Cormann stressed that trust in AI is built through inclusive, evidence-based processes and that international consistency, exemplified by the OECD’s 2019 AI Principles now adhered to by 50 countries, helps reduce fragmentation and align policy frameworks [77-80][86-88]. He identified coordinated transparency and incident reporting, such as the Hiroshima AI Process Code of Conduct and the emerging Global Partnership on AI (GPAI) common framework for AI incident reporting, as the most critical frontier-AI safety infrastructure to develop now [91-96], and highlighted the OECD’s open-source safety-tool catalogue as a means to make trustworthy AI more practicable for developers worldwide [98-99].


Singapore’s Minister Josephine Teo explained that smaller states must translate scientific insights into effective policies, invest in rigorous testing and simulations, and cooperate internationally to create interoperable safety standards [103-110][115-122][132-138][141-146]. Malaysia’s Gobind Singh Deo added that without dedicated agencies to enforce standards, regulations remain paper-only, and that ASEAN needs institutional mechanisms to sustain AI governance across the region [162-166][168-173]. Sangbu Kim of the World Bank argued that developing countries require early-stage capacity building and partnerships with advanced economies to embed safety architecture into AI systems before large-scale deployment [176-182][184-190].


Jaan Tallinn warned that the most pressing risk lies in the unchecked race among laboratories toward superintelligence, calling for a slowdown and greater transparency, and noting that private investors now have limited influence over leading AI firms [210-218][221-226][231-236]. The closing remarks reiterated that the coordination gap in frontier AI safety is real and urgent but can be closed through continued global collaboration, with a call to convene again at the UN General Assembly and the next AI Safety Connect edition [260-264].


Keypoints

Major discussion points


Urgent need for global coordination on AI safety, especially involving middle-power and “global-majority” states.


Eileen Donahoe frames the problem as fragmented, non-binding risk management across jurisdictions and stresses the role of middle powers [55-63]. Stuart Russell stresses that AI harms cross borders and require coordinated governance [44-46]. Mathias Cormann adds that trust is built through inclusive, evidence-based dialogue among governments, industry and civil society [77-84].


Proposed concrete governance infrastructure: transparent incident reporting, an international incident-response centre and open-source safety tools.


Cormann describes the “coordinated transparency and incident reporting” framework, the Hiroshima AI Process Code of Conduct reporting system, and the emerging Global Partnership on AI (GPAI) common framework for incident reporting that could evolve into an international response centre [91-98]. Earlier, the panel raised the idea of an international incident-response centre as a priority [75-76].


National and regional initiatives as models for collective action.


Singapore’s Minister Josephine Teo explains the need to translate scientific knowledge into policy, invest in testing, and develop interoperable standards, while highlighting OECD-led efforts such as the Global Partnership on AI [103-146]. Malaysia’s Minister Gobind Singh Deo outlines the ASEAN AI Safety Network, the importance of enforcement agencies, and the need for sustained regional institutions [152-173].


Industry dynamics, superintelligence risk, and the limited influence of investors.


Jaan Tallinn warns that the primary danger lies in the “cut-throat race” within leading AI labs, calls for a slowdown, and notes the difficulty of influencing these firms through investors, who are now largely sidelined [207-226][231-236].


Immediate 12-month priorities: refresh research agendas, develop practical testing tools, and institutionalise AI-safety governance.


Eileen Donahoe asks panelists to identify actions for the next year [236-239]. Josephine Teo stresses updating the Singapore consensus and advancing testing tools [240-249]. Gobind Singh Deo calls for building sustainable institutions to institutionalise the conversation [253]. Cormann underscores the need for a comprehensive, rapid catch-up across all fronts [251].


Overall purpose / goal


The discussion was convened to surface and narrow the “coordination gap” in frontier AI safety, to showcase existing international frameworks (OECD principles, Singapore consensus, AI Safety Connect), and to generate concrete, near-term actions that policymakers, middle-power states, and multilateral bodies can take to shape a trustworthy, globally coordinated AI governance regime.


Overall tone


The tone begins with a sense of urgency and alarm about rapid AI progress outpacing safety measures [1-4][57-60]. It quickly shifts to a collaborative, solution-oriented mood as participants share existing initiatives and propose concrete infrastructure [77-98][103-146]. Mid-conversation, a more cautionary and even confrontational tone emerges when discussing the “cut-throat race” in labs and the need for a slowdown [210-218][207-226]. The closing remarks return to an optimistic yet urgent call for coordinated action and institutionalisation [260-264][266-277].


Speakers

Osama Manzar


Area of expertise / role: Local anchor and co-organiser for AI Safety Connect; works with the Digital Empowerment Foundation on grassroots engagement.


Title / affiliation: Representative of the Digital Empowerment Foundation (co-organiser)[S2]


Jaan Tallinn


Area of expertise: AI safety advocacy, AI governance, philanthropy, and technology investment.


Title / affiliation: Founding engineer of Skype; early investor in DeepMind and Anthropic; co-founder of the Future of Life Institute[S4]


Stuart Russell


Area of expertise: Artificial intelligence research, AI safety, and ethics.


Title / affiliation: Professor of Computer Science, University of California, Berkeley; Director of the International Association for Safe and Ethical AI (IASEAI)[S5]


Gobind Singh Deo


Area of expertise: AI policy, regulatory frameworks, and regional coordination.


Title / affiliation: Malaysia’s Minister of Digital, leading Malaysia’s 2025 ASEAN chairmanship[S8]


Mathias Cormann


Area of expertise: International policy coordination, AI governance, and standards development.


Title / affiliation: Secretary-General of the Organisation for Economic Co-operation and Development (OECD)[S11]


Josephine Teo


Area of expertise: Digital development, AI policy, and regulatory implementation.


Title / affiliation: Minister for Digital Development and Information, Government of Singapore[S13]


Nicolas Miailhe


Area of expertise: AI safety strategy, convening global AI safety stakeholders, and capacity building.


Title / affiliation: Founder and lead of AI Safety Connect (organiser of the AI Safety Connect summit)


Sangbu Kim


Area of expertise: Digital infrastructure, AI safety implementation in development contexts.


Title / affiliation: Vice President for Digital and AI, World Bank[S18]


Eileen Donahoe


Area of expertise: AI governance, digital freedom, and human rights advocacy.


Title / affiliation: Founder and Managing Partner, Sympathico Ventures; former U.S. Special Envoy and Coordinator for Digital Freedom; former U.S. Ambassador to the UN Human Rights Council[S21]


Additional speakers:


Prime Minister Dick Schoof – Prime Minister of the Netherlands (mentioned as a guest speaker delivering a special address).




Full session report
Comprehensive analysis and detailed insights

The summit opened with Nicolas Miailhe warning that the race toward artificial intelligence has shifted from a theoretical pursuit to a massive, financially-driven endeavour, with billions – possibly trillions – of dollars being poured into frontier AI research while safety measures lag behind [1-4]. He framed AI Safety Connect as a response to this imbalance, describing it as a platform that “helps shape the frontier AI safety and security agenda” and that “encourages global-majority engagement” in frontier AI safety [6-9]. The organisation convenes semi-annual global meetings at major AI summits and the UN General Assembly, aiming to accelerate safety discussions, build capacity and conduct closed-door trust-building exercises [10-15]; AI Safety Connect also hosted closed-door scientific dialogues with senior industry leaders, the findings of which will be published soon [15-16]. The New Delhi week featured a full day of panels, solution demonstrations and a closed-door workshop, even hosting Prime Minister Dick Schoof of the Netherlands for a special address on leadership in AI safety [16-22]. Miailhe thanked co-hosts – the International Association for Safe and Ethical AI (IASEAI) directed by Professor Stuart Russell and the Digital Empowerment Foundation – as well as sponsors such as Sympathico Ventures, the Future of Life Institute, Ima and Yann, and the Minderoo Foundation, before introducing the panel of senior policymakers [23-30].


Stuart Russell then introduced the International Association for Safe and Ethical AI (IASEAI), a “global, democratic, scientific and professional society” with several thousand members and approaching 200 affiliate organizations [33-35] (Russell humorously noted that IASEAI is “the world’s worst acronym”). He announced the second annual IASEAI conference in Paris, already attracting over 1,300 registrants [36-38]. Russell distinguished the technical challenge of building safe systems from the governance challenge of ensuring only those systems are built, arguing that the latter “requires coordinated international action because the harms… cross borders” [40-46]. He highlighted India’s role as a champion of universal participation, noting that the summit’s location underscored the need for inclusive global coordination [46-47].


Eileen Donahoe framed the policy discussion by pointing out that AI is being deployed “with minimal guardrails” and that existing risk-management processes are “ill-adapted, fragmented across jurisdictions, or insufficiently binding” [56-60]. She warned that this creates an “unharmonised governance landscape” that sends mixed signals to developers, investors and regulators [58-60]. Donahoe then argued that middle-power and “global-majority” states can leverage pooled resources, market influence and regulatory innovation to move AI safety from rhetoric to real-world impact [62-66]. She posed two questions to the panel: what lessons have been learned from building consensus on AI safety frameworks, and whether an international incident-response centre should be a priority [71-76].


Mathias Cormann responded by stressing that trust in AI is built through “inclusion and on the basis of objective evidence” and that bringing together governments, industry, civil society and technical experts is essential [77-80]. He noted the speed mismatch between AI innovation and policy cycles, which creates gaps that must be bridged by occasional pauses for testing, auditing and information sharing [84-86]. Cormann highlighted the OECD’s 2019 AI Principles, now adhered to by 50 countries, as the first globally recognised baseline for trustworthy AI [88]. He identified “coordinated transparency and incident reporting” as the most critical frontier-AI safety infrastructure, citing the Hiroshima AI Process Code of Conduct reporting framework and the emerging Global Partnership on AI (GPAI) Common Framework for AI Incident Reporting, which could evolve into an international AI Incident Response Centre [91-96]. He also described the OECD’s launch of an open call for open-source safety and evaluation tools, now hosted in the OECD.ai catalogue [98-99]; the OECD AI Policy Observatory provides data and evidence on policy approaches, facilitating peer learning and removing rhetoric [88-90].


Singapore’s Minister Josephine Teo explained that smaller states cannot dictate the rules for AI systems that originate elsewhere, but they can still act by “translating what we know from science into policy” [103-110]. She underscored the need to assess the effectiveness of policies and to understand trade-offs, drawing an analogy with aviation safety where runway separation distances are set only after extensive research, testing and simulation [119-138]. Teo argued that international collaboration – through the OECD’s Global Partnership on AI, AI Safety Connect and IASEAI – is required to develop interoperable standards and avoid fragmented compliance costs [141-146]. Singapore is preparing a second edition of its AI safety research consensus, expected in the coming months [240-249].


Malaysia’s Minister Gobind Singh Deo described the ASEAN AI Safety Network, established under Malaysia’s 2025 ASEAN chairmanship, as a “dual-track approach” that builds national capacity while leading regional coordination [152-158]. He warned that standards and regulations are ineffective without an agency capable of enforcement, noting that without such institutions “the standards… remain strong on paper but have no impact” [162-166][168-173]. Deo called for the institutionalisation of AI-safety governance across ASEAN, with sustained political will, technical capacity and resources to move beyond aspirational goals [157-173].


Sangbu Kim, Vice-President for Digital and AI at the World Bank, argued that developing countries need “early-stage capacity building” and partnerships with high-capacity economies and firms to embed safety architecture from the design stage [176-183]. He described red-team exercises with large tech companies as a way for low-capacity nations to learn how to defend against AI-driven attacks, using the metaphor of AI as a “spear” that can penetrate any shield, but which can also be countered by a strong “shield” built with AI itself [184-190][196-199]. Kim stressed that continuous collaboration is the only way for the Global South to keep pace with emerging threats [191-194][200-202].


Jaan Tallinn, co-founder of the Future of Life Institute, warned that the most serious danger lies in the “cut-throat race” inside leading AI labs to achieve superintelligence [210-214]. He cited recent public calls for a slowdown from AI leaders and argued that political pressure, demonstrated by the 130,000-signature petition, could make a prohibition on superintelligent AI feasible if there is broad scientific consensus and public buy-in [221-226]. Tallinn also asserted that private investors now have little influence over frontier AI firms, which are moving toward IPOs and can replace any missing funding, rendering investor-driven safety interventions largely ineffective [231-236].


In his closing remarks, Nicolas Miailhe reiterated that the “coordination gap” in frontier AI safety is both real and urgent, but “closable” through continued global collaboration [260-263]. He invited participants to the next UN General Assembly session in New York, where the fourth edition of AI Safety Connect will be convened [264-265]. Osama Manzar then broadened the perspective, urging that AI safety be framed as “saving people from AI” and that protecting human intelligence from artificial systems requires strong ethical guardrails and policy playbooks [266-277].


Across the panel, speakers repeatedly emphasized that inclusive, multi-stakeholder coordination is essential for frontier AI safety [10-15][44-46][77-86][56-66]; that transparent incident-reporting and a potential international response centre constitute critical infrastructure [91-96]; that capacity-building partnerships can help low-capacity nations embed safety-by-design (Miailhe, Kim) [15-16][176-183]; and that occasional pauses or a deliberate slowdown are needed to build trust [84-86][210-214]. Disagreements emerged around the role of investors – Donahoe called for their meaningful inclusion [228-230] while Tallinn argued they no longer wield influence [231-236] – and over whether a slowdown should be the primary lever (Tallinn) or whether coordinated transparency and incident-reporting infrastructure should take precedence (Cormann) [208-227][91-96]. A further tension concerned the search for a “silver-bullet” solution; Cormann rejected a single fix in favour of a comprehensive catch-up across technical, regulatory and institutional dimensions [251-254], whereas Tallinn promoted a slowdown as the pivotal measure [208-227].


Key take-aways include: (i) AI safety is lagging behind rapid AI development and requires both technical and governance solutions; (ii) AI Safety Connect provides a semi-annual convening platform to accelerate safety dialogue and capacity-building; (iii) middle-power and global-majority states can shape international AI practices through pooled resources and normative influence; (iv) trust is built through inclusive, evidence-based processes and coordinated transparency; (v) the OECD’s incident-reporting framework and open-source safety-tool catalogue are foundational for a transparent AI governance ecosystem [88-90]; (vi) regional mechanisms such as the ASEAN AI Safety Network illustrate how national capacity-building can be combined with collective coordination; (vii) partnerships with advanced economies and firms enable the Global South to design safety-by-design AI systems; (viii) a deliberate slowdown of frontier AI development, coupled with greater lab transparency, is advocated to manage existential risk; (ix) investors’ influence on AI safety has diminished as leading firms move toward IPOs [231-236]; and (x) dedicated funding is needed to embed safety architecture from the design stage [84-86][254-255].


Concrete actions identified: AI Safety Connect will continue its semi-annual convenings and plan a fourth edition at the UN General Assembly [10-15][264-265]; the OECD is considering expanding coordinated transparency and incident-reporting frameworks, with the possibility of evolving toward an international AI Incident Response Centre [91-96]; the OECD has launched an open call for open-source safety and evaluation tools, now hosted in the OECD.ai catalogue [98-99]; Singapore will publish a refreshed AI safety research priority list and advance testing-tool development within twelve months [240-249]; ASEAN will operationalise its AI Safety Network, building enforcement agencies and sustaining political will over the next 12-18 months [152-173]; the World Bank will deepen collaborations with high-capacity economies and tech firms to provide red-team and safety-by-design support to Global South clients [176-183]; and all participants called for dedicated funding to embed safety architecture from the design stage [84-86][254-255].


Unresolved issues remain, notably the precise legal authority and governance model for an international AI incident-response centre, how to create enforceable interoperable standards without imposing excessive compliance burdens, mechanisms for effectively involving private investors in safety governance, sustainable financing for regional bodies such as the ASEAN AI Safety Network, and the detailed trade-offs between safety measures and innovation speed for smaller states lacking jurisdiction over AI origins [91-96][84-86][228-230][152-173].


Suggested compromises include: implementing targeted, temporary pauses in AI development to allow testing and auditing before scaling; adopting a phased, coordinated incident-reporting system that protects companies from legal or commercial penalties while sharing near-miss data; leveraging middle-power pooled resources to bridge gaps between fast-moving AI labs and slower policy cycles; combining national capacity-building with regional enforcement mechanisms (e.g., ASEAN) to balance sovereignty with collective safety; and conditioning continued investment and market access on greater transparency from leading AI labs, thereby aligning private incentives with safety goals [84-86][91-96][62-66][157-166][207-226].


Overall, the summit underscored that the coordination gap in frontier AI safety is urgent but can be narrowed through inclusive global governance, transparent incident-reporting, capacity-building partnerships, and, where necessary, a calibrated slowdown of development. The convergence of views across policymakers, regional leaders and technical experts provides a solid foundation for concrete policy initiatives in the coming year. Manzar concluded by urging that AI safety be framed as “saving people from AI”, emphasizing the need to protect human intelligence through robust ethical and policy safeguards [266-277].


Session transcript
Complete transcript of the session
Nicolas Miailhe

that the race towards artificial intelligence is no longer a theoretical pursuit. As billions, and maybe trillions now, of dollars are getting deployed to push the frontier of artificial intelligence, the technology is now advancing rapidly. And safety is not keeping pace with it. There are wonderful opportunities on the other side of this quest. There are also big risks. And so that’s the purpose, that’s the reason AI Safety Connect was founded. AI Safety Connect is there to help shape the frontier AI safety and security agenda towards what I would frame as commonsensical AI risk management. AI Safety Connect has been founded to encourage global-majority engagement in frontier AI safety. And AI Safety Connect has been created to showcase concrete governance coordination mechanisms, tools, and solutions.

So how do we do this? We convene at each AI summit. So last year we started in Paris, this year in India, next year we’re going to be in Switzerland. But we also convene at the UN General Assembly, right? We need a faster tempo for these safety discussions, so every six months we have this global convening. We also do capacity building, and we also do trust-building exercises, at times behind closed doors. Well, this week in New Delhi has been an intense one, an impactful one. On Tuesday we had a full day of panels, conference, solution demonstrations, and closed-door workshop discussions on some specific nuts to crack to advance AI safety. We, for example, had the privilege of hosting Prime Minister Dick Schoof from the Netherlands on stage to deliver a special address on the role of top leadership in advancing AI safety.

We also engage with industry, engage with academia, of India and abroad. So it’s been an extremely busy week. Beside our main event, we had this closed-door discussion that I was mentioning; yesterday and today, these closed-door scientific dialogues. We’re going to publish the results soon; they brought together senior industry leaders to discuss shared responsibility for AI safety. Well, obviously, none of this would happen without partnership. And we want to thank our co-hosts, the International Association for Safe and Ethical AI and its director, Professor Stuart Russell, to whom I will hand over the floor in a few minutes, and the Digital Empowerment Foundation, which is anchoring us at the grassroots here with Osama Manzar, who will close the session later on.

And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moderate that panel and we’re thankful for that. The Future of Life Institute, Ima and Yann, who’s been supporting this effort, and the Minderoo Foundation, whose team is here as well. And it’s great to have your support and we are thankful for that. So today we’re about to hear from His Excellency Mathias Cormann, who’s the Secretary-General of the OECD. We’re going to hear from Her Excellency Minister Josephine Teo, who’s the Minister for Digital Development and Information at the Government of Singapore. Thank you for your continuous support, really appreciate that. Same for Jaan Tallinn, who’s the AI investor but also a founding engineer at Skype and the co-founder of the Future of Life Institute. And last but not least, we also have Minister Gobind, who’s going to be with us from Malaysia, the Minister of Digital. Thank you, Minister, as well as Vice President Kim for Digital and AI at the World Bank. So an extremely important conversation to have. And before we welcome you to the stage, I would like to hand over the floor to Professor Stuart Russell to say a few words and to speak about also what’s happening next week in Paris. Thank you so much.

Stuart Russell

Thank you very much, Cyrus and Nico. So as Nico mentioned, the International Association for Safe and Ethical AI, or IASEAI, the world’s worst acronym, is a global, democratic, scientific and professional society. We have several thousand members and approaching 200 affiliate organizations. Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity. And as Nico mentioned, our second annual conference will take place in Paris starting on Tuesday. It’s still, I think, possible to register, but we’re already up over 1,300 people coming. It’s at UNESCO headquarters in Paris. Thank you. So achieving this mission of ensuring… that AI systems operate safely and ethically is partly a technical challenge. How do we even build systems that have that property?

But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this panel is mainly about this second challenge. And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders. And we must coordinate to make sure that they don’t happen or they don’t originate anywhere. And it’s, I think, fitting that we are having this summit here in India, which has really, among other things, championed the idea that everyone on Earth should have a say. And so with that, I will hand over to Eileen. Thank you very much.

Nicolas Miailhe

Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the former U.S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UN Human Rights Council. Eileen? Welcome the speakers on the floor. Please, Your Excellency Mr. Mathias Cormann, Mr. Gobind Singh Deo, Ms. Josephine Teo, and Mr. Jaan Tallinn, as well as Mr. Sangbu Kim, join us on stage.

Eileen Donahoe

Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get right to our speakers. So we’re here to share views on the opportunity for policymakers to impact international AI governance. As the race towards AGI and superintelligence intensifies, AI safety advocates face a compounding challenge. The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators. The result is an unharmonized governance landscape that fails to shape the behavioral incentives of those building and funding frontier AI. Economies, governments, and societies do not respond well to such mixed signals.

While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper international diplomacy on the most… extreme risks. At this juncture, middle powers and global-majority states can’t be seen as peripheral actors in this landscape. Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safety. Leading from the middle may turn out to be a more powerful approach than previously anticipated. Whether or not that collective power is exercised now will determine whether international AI governance moves from the rhetorical level to real-world impact on safety. This panel will aim to identify present-day coordination gaps in global AI practice and the global market.

We will also look at the role of global AI in international AI safety and highlight practical steps policymakers can take in the coming months to close them. So to our panel, I’ll start with Secretary-General Cormann. The OECD has done remarkable work over the past decade, developing consensus on the OECD principles, providing a definition of AI systems that has resonated internationally, and playing an international role in operationalizing the Hiroshima International Code of Conduct. Along with those foundations, we now have the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. With these principles, definitions, and frameworks in mind, a two-part question for you. First, what are the key lessons learned from the process of building consensus and then implementing these frameworks?

And then second, looking ahead: what’s the most critical piece of coordinated frontier AI safety infrastructure we should be building now? Some have called for an international incident response center, and we’re all curious whether you think that should be a priority and achievable. Just some small, easy questions.

Mathias Cormann

In terms of what is the key to success, what is the most important lesson looking back on what we need: trust is built through inclusion and on the basis of objective evidence. And, you know, I think what we’ve learned over the last few years is that bringing together all the relevant actors, governments, companies, civil society, technical experts, is what we need to do. I mean, each has a different perspective and different imperatives. I mean, markets reward the private sector for speed, scale, and innovation, while governments must manage risk and protect the public interest without stifling progress. But a challenge, and it’s been mentioned in some of the opening remarks, a challenge for policymakers in this context is that AI is moving much faster than policy cycles have traditionally moved, which easily then creates gaps between innovation, progress and opportunity and the necessary oversight, mitigation and management of risk.

But all sides in this conversation do share an essential common interest, and that is to ensure that the systems that are developing are trustworthy, because without public trust, in the end, even the most powerful AI tools will struggle to gain broad adoption. So that means that occasionally, and it’s not always popular with everyone, but occasionally we should slow down. Occasionally we should actually pause. Pause, test, monitor, audit, share information, and take the time and invest in building confidence that these systems can work as intended and respect fundamental rights. So that’s sort of, I guess, the first point. Another critical lesson involves international consistency, and this is part of the reason why these sorts of summits are so important: to really facilitate these conversations among countries and among different jurisdictions, because national priorities can vary quite widely, and there’s of course fragmentation and compliance-cost-related risks. And at the OECD, really what we’ve been doing for six decades now, across different policy areas, is to try and reduce fragmentation by achieving alignment around key principles, building shared evidence and facilitating the necessary conversations to develop a more coherent, better coordinated approach moving forward. And on AI, I mean, we’ve developed the OECD principles, which were first adopted in 2019 and which are now adhered to by 50 countries around the world, and that was really the first globally recognized baseline for trustworthy AI. The OECD’s lifecycle definition of an AI

system has since shaped policy frameworks from the EU AI Act to U.S. executive orders. And we’ve had just earlier the meeting of the Global Partnership on AI, co-chaired by Korea and Singapore. We’ve got the OECD AI Policy Observatory, which is sort of essentially the broad gamut of all of the different policy approaches around the world, to provide countries and industries with data and evidence on what’s being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it, but really looking at the facts. Now, looking ahead, and you sort of ask a question here about what to do about the risk. I mean, the most critical piece of frontier AI

safety infrastructure is coordinated transparency and incident reporting. I mean, the Hiroshima AI Process Code of Conduct and its reporting framework launched at the AI Action Summit in Paris last year. You know, that’s a promising step, and we’ve got to continue to develop that. Since their publication, 25 organizations across nine countries have already submitted detailed reports on how they manage AI risks, offering for the first time a comparable view of developer practices across jurisdictions. The next stage is to strengthen information sharing on AI failures and near misses. The GPAI Common Framework for AI Incident Reporting aims to help us collectively learn from mistakes before they scale globally, and over time, this could evolve into an international AI

Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith. Finally, we do need to scale access to practical safety tools. With global partners, the OECD recently launched an open call for open-source safety and evaluation tools, hosted in the OECD.ai catalog of tools and metrics, to make trustworthy AI easier to implement in practice. I mean, these are some initiatives to form the foundation of a more transparent, data-driven, and interoperable AI governance ecosystem.

Eileen Donahoe

Excellent. Minister Teo, a number of questions for you, but let me start with the fact that Singapore occupies a very distinctive position in the global geostrategic landscape as a pro-innovation, advanced knowledge economy, with deep commercial and diplomatic ties to both the U.S. and China. As the race to AGI intensifies and bilateral tensions mount, is there a role for Singapore and other middle powers to play in bridging the coordination gap to keep scientific and safety channels open? And also, what’s the most important step middle powers can take in the next 12 months to help establish a shared minimum understanding of frontier safety?

Josephine Teo

Well, thank you very much for that question. I think there is no running away from the fact that for smaller states, and that includes Singapore, the technology that our companies, our citizens are going to rely on does not originate from our shores. So it doesn’t necessarily come within our jurisdictions. We don’t always get to set the rules. Having said that, I do believe that we’re not without agency. It doesn’t mean that we take a step back and just let things happen to us. There are still things that we can do. One of the most important things, I think, as policymakers is for us to think about what it takes to translate what we know from science into policy.

And I wanted to just say why this is so important. In our case, as policymakers, the key questions will always be: are the policies that we make effective? And also, policies always come with trade-offs. With the question of effectiveness, there is always a need to understand what actually works, as opposed to what looks good on paper. With the question of trade-offs, it’s about understanding what we lose as a result of whatever safety aspects it is that we choose to put in place. And whether we can minimize them, can we mitigate them? Now, in areas where safety is the objective, we can’t just go with gut. We can’t just go with speculation. Take, for example: in my previous life, I was working on promoting Singapore’s air hub.

And we had to deal with a question of aviation safety. We were expanding our airport. It was going to carry many more passengers in and out of the country. But we are limited by the number of runways. And in land-scarce Singapore, you can’t just click your fingers and say, let’s build a new one. It’s a long runway. It’s very expensive anyway. Then there is the question of what do you do when you have these jumbo jets like A380s? Because each time an A380 hits the runway, it creates so much of a blast that you really need to create more distance between the A380 taking off and the next aircraft that is scheduled to take off.

Now, this is not a question that the transport minister can just decide on a whim. The air traffic management has to decide on its policy of how much distance is considered safe between landings, or rather between takeoffs. And to answer this question, you really need to invest in the research. You need to invest in understanding the tests. So the science is one part of it. But to go from science to policy, you are actually going to need a lot of time. You are going to need a lot of tests. You are going to need a lot of simulations. You need to understand whether the distances that you decide are safe work well in a thunderstorm, a tropical thunderstorm.

Does it work just as well in a snowstorm? Well, we don’t have snow in Singapore. But you think about the airline that operates this: if each country that they fly into has a different safety distance, that creates some difficulty. So we therefore think that not only is there a need to invest in understanding the science, not only is there a need to understand what testing looks like, what good testing looks like, there is also a need for us to think about what standards will eventually be interoperable, what do they look like, which is why we think that international efforts, the collaboration that… that is being carried forward by the OECD through the Global Partnership on AI, the AI Safety Connect effort, and also IASEAI.

Where is Stuart now? Those kinds of efforts, you can’t do away without. At the outset, there is likely to be a bit of fragmentation. And the trade-off with not having these conversations is that we are not even going to make advances in AI safety. And I don’t think that that’s a very good place for us to be in. It doesn’t give us the assurance that we can deliver to our citizens. And it does not create a foundation of trust that will eventually help us to push ahead with the use of this technology on a wider scale. So that’s how we are thinking about it, Eileen. Thank you.

Eileen Donahoe

So let me turn to Minister Gobind from Malaysia, and note that under your leadership and Malaysia’s 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN’s agenda by establishing the ASEAN AI Safety Network. Malaysia is now finalizing its own AI National Action Plan, and Malaysia’s AI Governance Bill is expected in Parliament in 2026. So this dual-track approach of building national capacity while leading regional coordination represents a model of middle-power agency that other countries are watching closely. So what lessons do you think other middle powers can draw from Malaysia’s experience? And on the ASEAN AI Safety Network, we have to note that it has yet to be operationalized, and it will require sustained political will, technical capacity and resources.

So what concrete steps must ASEAN take in the next 12 to 18 months to ensure that this isn’t just aspirational?

Gobind Singh Deo

Online fraud, for example, scams; you have deepfakes today; you have huge concerns about certain vulnerable groups that are going to be impacted, children, older folk and so on and so forth. So this is something that stretches across the region. How do we deal with it in a coordinated way and ensure that the conversation doesn’t just stop with the government of the day, but is a conversation that expands over a period of time, with clear policies that we can actually execute? The second layer that I think we need to think about is in the event there’s a need for execution. When we speak about risks in AI and we speak about how we’re going to govern these risks, we often talk about standards.

We often talk about regulation. We even speak about legislation at times, for areas that pose higher risks. But ultimately, it really comes back down to making sure you have an agency that can enforce it, because you can have the best standards, regulations and legislation, but if there is no institution that’s really able to implement those standards, to ensure that they are properly implemented and also to ensure that rules for failure to implement are enforced, then those standards, regulations and policies are really going to be just strong on paper, but they’re not going to really have that impact that you need. So again, how do you build this mechanism across ASEAN where every country strengthens themselves domestically first and then moves across to the ASEAN member states, and hopes to learn from their experiences, so that we can together move ahead in this new world of AI and, I think, the threats that we anticipate in future.

Now the third part, which is really important, is also ensuring that whilst this goes on, you create those policies, you have institutions that enforce, and the discussions persist at an ASEAN level. I think what is important is also to have that expertise looking at what comes next. We must make sure that our countries are prepared for the risks that are to come with the next generation of technology. This is important because you don’t want a situation where new technology is adopted, there are risks that come with this new technology, and you’re not prepared. I think that’s something we want to avoid, and that’s the reason why I come back to where I started off. We really need to look at building institutions that have the expertise and, of course, are able to sustain as we go along, and to build and deliver something that’s impactful.

Sorry, but that’s in short what we’re doing in Malaysia today.

Eileen Donahoe

Excellent. Thank you so much. Okay. Let me turn to Vice President Kim and talk about the World Bank, which has been at the forefront of digital public infrastructure, helping countries leapfrog legacy systems. We note that frontier AI systems, though, are arriving in the Global South under very different conditions from previous waves of technology, and governments are under pressure to deploy AI systems quickly, often using models that haven’t been adequately tested, let alone certified for their context, languages, or risk tolerances. So how can the World Bank help Global South countries move from being passive recipients of frontier AI to active shapers of safety and reliability requirements before the systems are deployed at scale?

Sangbu Kim

Thank you. In one word: definitely, we need to make our clients well prepared from scratch. When they design their AI systems, they need to design the safety architecture within the system. In general, that’s very correct. But the real challenge is that… nobody can really anticipate a new type of threat, especially in some countries with low capacity; it is really hard to figure out what that will be. So in order to tackle that type of irony and dilemma, we need to work very closely with very developed economies, companies and governments, and very high-end examples, so that we can really connect those good examples to the developing world. Partnership is one of the good examples. We are helping our countries; for example, some big tech companies are running red teams, trying very hard to attack their own systems in advance by fully utilizing AI.

So through that type of practice and experiment, they can learn how to prevent an AI attack in the future, which is pretty much possible. So in this way, it is inevitable for our developing countries to keep track of the new trends and new innovation, even in this safety protection area. It is the only way. So I have to admit this constraint. But think about this. There is an anecdotal story in East Asia, in China and in Korea, about a merchant selling two products. Number one is a spear. And he keeps saying that this spear is so strong that it can get through any kind of shield. So this is one vendor. The other vendor is selling a shield.

And then they are saying that this shield is one of the safest and strongest shields: no spear can get through this shield. This is exactly an ironical situation. If you think about AI, the AI attack is the spear. AI is so strong and smart and really capable, so it can get through and hack any system with high-end intelligence and knowledge. But the good news is that, on the other hand, we also can build strong protective systems by fully utilizing AI. So this is one piece of good news, but the constraint is that we do not clearly know how AI can really evolve to fully protect against those big attacks in the future. So in order to solve this type of ironical situation, from the developing world point of view and from the World Bank point of view, the only way is to work very closely, collaborate and learn from the advanced technology and advanced companies and advanced countries.

Eileen Donahoe

Thank you so much. Last but not least, Mr. Jaan Tallinn, you occupy a very rare position in this landscape as a founding engineer of Skype, an early investor in DeepMind and Anthropic, and you’re also the co-founder of the Future of Life Institute, which last October released a statement on superintelligence, calling for a prohibition on superintelligence development until two conditions are met: number one, broad scientific consensus that it can be done safely and controllably, and second, strong public buy-in. Let’s just ask the hard question. What would an effective prohibition look like in practice? How could that work?

Jaan Tallinn

Thank you very much. So I think I’m kind of like a little bit different from the people on this panel, in that, I guess, my main kind of threat vector, my main worries about the future, are less about how AI is being deployed and diffused and taken into practice. I’m way more worried about what is happening in the labs, in the top AI companies. I’m not sure what the future is going to look like, because they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence. And, like, I mean, we just saw yesterday the picture, the photo of it, where Narendra Modi, Dario Amodei, and Sam Altman refused to link hands.

I mean, this is, like, indicative. We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can’t do it alone. And I think there are, like, two reasons why it’s, like, an unfortunate situation. One is that the U.S. as a country is conflicted. They basically rely on AI for their economic and competitive power. So they are, like, very hesitant to kind of meddle with the now-cutthroat situation in AI companies. And the rest of the world really doesn’t understand how big a danger they are in now. So that’s part of the reason why we did the superintelligence statement: to create awareness that there is increasing political demand to do something about this situation.

We now have more than 130,000 signatures, which is, like, many times more than our original six-month pause letter had in 2023. So yeah, if there was enough pressure… I think clearly, like, the rest of the world is still kind of more powerful than the leading AI countries. There are more people, there’s more economic power, etc. So if there was, like, enough pressure, this could be solved. Like, the way I put it is that it’s super hard to do, like, a $10 billion project; it’s impossible to do it if it’s illegal. So having these trillions flow into AI actually makes it easier to govern, rather than harder.

Eileen Donahoe

So I’m tempted to follow up with a question about investors and their potential role in this. They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. So what would it take to bring investors meaningfully into the safety conversation?

Jaan Tallinn

So, yeah, I think the answer is kind of simple. I don’t think investors play much of a role anymore, because the leading AI companies now are kind of above the level where private investors can influence them. They will IPO soon. And if you are in, like, an IPO market, there is, like, a level playing field, which means that if somebody’s not funding, somebody else will. So I don’t think investors… investors could have affected things, but, like, five, ten years ago.

Eileen Donahoe

Great. Okay, so since we’re running short on time, I’m going to ask one question and ask you all to answer it, very briefly each, which is about the 12-month window. Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months, before frontier AI capabilities advance beyond our ability to evaluate and govern them. So what would each of you recommend be prioritized between now and, basically, the next year to two years, to enhance safety and security?

Josephine Teo

I think there are two, really. I think the AI safety research priorities need to be refreshed, because the field has moved so quickly. The Singapore consensus identified a set, but as soon as they were published, we recognized that they would be out of date. So we need to refresh it. That’s why we’re going to have the second edition worked on, hopefully in a few months. The second thing, I think, is that we can’t just keep thinking about frameworks, you know, and guidelines. At some point, we need to be able to introduce better testing tools. And until we are able to do so, the companies that are developing and deploying AI models also don’t have a very practical way of giving assurance.

So I’d like to see, in the next 12 months, some further advancements in those two areas.

Mathias Cormann

I’ll be really quick. I know there’s always a temptation in these sorts of conversations: what is the one thing that can sort of fix it all? And the truth is, there’s not one thing. We’ve got to go as fast as we can to play catch-up to a degree, but we’ve also got to go as comprehensive and as deep as we can. There’s just no alternative. There’s catch-up to be played, we’ve got to put in a real effort, and it’s got to be right across the board. And I don’t think that you can just say there’s the one thing that will make us all safe and it’s going to be okay.

Eileen Donahoe

Minister Gobind?

Gobind Singh Deo

I think, as I said earlier, we need to start thinking how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance. In this regard, we have to understand that things are going to move very quickly, and you’re going to see new technology develop very fast, which brings new risks as well. So in that regard, you’ve got to build something that’s sustainable, and I think in order to do that, institutionalizing it should be a priority.

Sangbu Kim

Everyone is really rushing for AI system development, AI solution development. That means AI safety measures are currently under-invested. So I really would like to urge all of us to think about this: this is not free, you know. We need to spend some money to protect the system in advance, from scratch, when we design the system. So that means we should allocate some money to fully invest in the…

Eileen Donahoe

Jaan Tallinn?

Jaan Tallinn

So, slow down. We really need to slow down; the companies are asking for it. And, like, instrumental to that would be basically transparency: more people should know what the leaders of AI companies know, in order to basically understand how crucial the slowdown now is.

Eileen Donahoe

Okay, great. Well, I believe we have a little bit of a close coming, and thank you all so much. I wish we had had a day to talk about all of these issues. But thank you so much. Thank you very much.

Nicolas Miailhe

Thank you very much, Eileen, and this fantastic panel, excellencies, colleagues, friends. What we’ve heard today confirms something important: the coordination gap in frontier AI safety is real, and it is urgent. And as we’ve discussed today, it is closable. And before I hand over the floor to Osama Manzar to close off with a few minutes of remarks and reflection, I’d like to invite you all to the next United Nations General Assembly in New York, where we hope to organize the fourth edition of AI Safety Connect, hopefully with many of the great policymakers and leaders we have heard from today, to carry forward that collective effort. Osama, the floor is yours.

Osama Manzar

Well, thank you very much. And we are one of those absentee co-organizers in this one, you know, being a local. But apart from thanking each one of you who didn’t get up and, you know, go out of the room, and every one of you who gave all the safety remarks before the usage of AI, on behalf of the 40 million people that we have reached out to in the last 23 years, and the billions of other people whom we are going to work for, I want to suggest that the entire safety aspect of AI should be more from the perspective of: please save people from AI. Right? Because that’s the safety. Like, it’s a car on the road.

You know, we have to save people before you teach people how to think. So we also have to keep a very, very strong focus: how do we save human intelligence from artificial intelligence? And how do we build in the safety guards and all the ethics and all the, you know, policy playbooks? Thank you very much. Thank you.

Factual Notes
Claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“Nicolas Miailhe warned that the race toward artificial intelligence has shifted from a theoretical pursuit to a massive, financially‑driven endeavour, with billions – possibly trillions – of dollars being poured into frontier AI research while safety measures lag behind.”

The knowledge base notes massive compute investment driven by the race to be first and highlights the scale of AI investment worldwide, confirming the claim of a large financial push and accompanying safety concerns [S72] and [S73].

Additional context (medium confidence)

“The shift toward massive AI investment is driven by competition to be first, though efficiency improvements may reduce compute requirements.”

Additional nuance is provided: while investment is huge, future efficiency gains could lessen the need for such large spending, adding detail to the claim about billions-trillions of dollars [S72].

Confirmed (high confidence)

“AI Safety Connect convenes semi‑annual global meetings at major AI summits and the UN General Assembly to accelerate safety discussions, build capacity and conduct closed‑door trust‑building exercises.”

The source states that the group meets at each AI summit and also convenes at the UN General Assembly, with a six-month cadence for global safety discussions [S22].

Confirmed (high confidence)

“Stuart Russell argued that the governance challenge of ensuring only safe systems are built requires coordinated international action because the harms cross borders.”

The knowledge base explicitly mentions that the governance challenge needs global coordination because harms cross borders [S86].

Confirmed (high confidence)

“The New Delhi week hosted Prime Minister Dick Schuh of the Netherlands for a special address on leadership in AI safety.”

The UN General Assembly agenda lists Prime Minister Dick Schoof of the Netherlands speaking, confirming the presence of a Dutch prime minister at the event [S79].

Correction (high confidence)

“The report misspells the Dutch prime minister’s name as “Dick Schuh”.”

The correct spelling, according to the official UN record, is “Dick Schoof” [S79].

External Sources (86)
S1
Hack the Digital Divides | IGF 2023 Day 0 Event #19 — Moderator – Peter A. Bruck:Can I ask the technical support to see if we can put the slides in? Is that good? Hello, good…
S2
S4
S5
Driving U.S. Innovation in Artificial Intelligence — 13. Stuart Appelbaum – President, Retail Wholesale and Department Store Union 14. Stuart Ingis – Chairman, Venable 15. …
S6
S7
Acknowledgements — In addition to coordinating simultaneous attacks on a single target, such UAVs could disperse to find and attack a la…
S8
Policymaker’s Guide to International AI Safety Coordination — -Gobind Singh Deo- Minister from Malaysia (leading Malaysia’s 2025 ASEAN chairmanship)
S10
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — -Mathias Cormann- Secretary General, OECD (Organisation for Economic Co-operation and Development) -Moderator- Role: Ev…
S11
Policymaker’s Guide to International AI Safety Coordination — -Mathias Cormann- Secretary General of the OECD
S12
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — 1030 words | 136 words per minute | Duration: 452 seconds — India AI Impact Summit. And thank you to India for your lead…
S13
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Josephine Teo- Role/title not specified (represents Singapore)
S15
Policymaker’s Guide to International AI Safety Coordination — – Nicolas Miailhe- Eileen Donahoe- Jann Tallinn- Josephine Teo – Nicolas Miailhe- Mathias Cormann- Stuart Russell- Jose…
S16
Policymaker’s Guide to International AI Safety Coordination — Speakers:Nicolas Miailhe, Eileen Donahoe, Jann Tallinn, Josephine Teo Speakers:Nicolas Miailhe, Mathias Cormann, Stuart…
S17
Internet Society’s Collaborative Leadership Exchange (CLX) | IGF 2023 Day 0 Event #95 — Nicolas Fiumarelli:I am Nicolás Fiumarelli, 33 years old, and proud former youth ambassador. I was a youth ambassador as…
S18
AI for Social Good Using Technology to Create Real-World Impact — The World Bank’s Sangbu Kim presented concrete examples of how locally successful solutions can achieve global scale. He…
S19
Panel 5 – Ensuring Digital Resilience: Linking Submarine Cables to Broader Resilience Goals — – Nomsa Muswai Mwayenga- Sangbu Kim – Yongbo Tang- Sangbu Kim
S20
S21
Policymaker’s Guide to International AI Safety Coordination — Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the form…
S22
https://dig.watch/event/india-ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moder…
S23
The Declaration for the Future of the Internet: Principles to Action — A key figure tackling this connectivity challenge is Zeyna Bouharb, serving as head of international cooperation at Oger…
S24
World Economic Forum — The Centre for the Fourth Industrial Revolution is one of the Forum’s key centres of thematic work, with digital technolog…
S25
AI Meets Cybersecurity Trust Governance & Global Security — Building confidence and security in the use of ICTs | Artificial intelligence | Data governance Building trust through …
S26
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — By focusing on education, industry collaboration, and capacity building, Malaysia aims to effectively tackle cyber threa…
S27
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Despite typical concerns about private sector data control, there was consensus that partnerships with private sector en…
S28
Unlocking Trust and Safety to Preserve the Open Internet | IGF 2023 Open Forum #129 — Advocating for active engagement with civil society, McKay suggests that companies should proactively encourage dialogue…
S29
State of play of major global AI Governance processes — Its flexibility and adaptability are praised for bridging institutional, cultural, and regional practices. A cooperative…
S30
Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All — Eileen Donahoe:It’s difficult. So many good questions and so many layers to them. I will start with the two points by ac…
S31
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S32
AI as critical infrastructure for continuity in public services — Inclusive participation of all stakeholders (government, civil society, technical community, private sector) breeds legi…
S33
Main Topic 1: Why the WSIS+20 Review Matters and How National and Regional IGFs Can Enhance Stakeholder Participation — Mark Carvell: Okay having covered a message about the IJF in particular which is one area of focus for the review of cou…
S34
Table of contents — + We increase the capacity of authorities and organisations performing public functions and providers of vital services …
S35
Opening of the session — Capacity Building and Regional Cooperation Referenced non-paper on role of regional organizations with examples from AS…
S36
Decoding Disinformation: Lessons from Case Studies — Against this background, a considerable number of national and regional legal frameworks, as well as private-led initiat…
S37
State of Play: Chips / DAVOS 2025 — Amandeep Singh Gill’s view on the limited impact of government incentives in shifting the manufacturing landscape is som…
S38
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Discussion point:Investment Risk and Market Dynamics
S39
INTRODUCTION — Rather than developing a framework of risks linked to general and thus cross-national assessments, it is t…
S40
Secure Finance Risk-Based AI Policy for the Banking Sector — Evidence:He describes how SEBI allowed algorithmic trading to develop from 2004-2010 without regulation, then introduced…
S41
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S42
AI Meets Cybersecurity Trust Governance & Global Security — Multi-stakeholder engagement including industry, technical community, and civil society is indispensable for managing sy…
S43
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa emphasizes the importance of maintaining multi-stakeholder approaches similar to the Internet Governance Forum (IG…
S44
Policymaker’s Guide to International AI Safety Coordination — Cormann identifies coordinated transparency and incident reporting as the most critical safety infrastructure. He points…
S45
Policymaker’s Guide to International AI Safety Coordination — And we’ve had just earlier the meeting of the Global Partnership on AI co-chaired by Korea and Singapore. We’ve got the…
S46
India unveils AI incident reporting guidelines for critical infrastructure — India is developing AI incident reporting guidelines for companies, developers, and public institutions to report AI-relat…
S47
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S48
AI That Empowers Safety Growth and Social Inclusion in Action — “investors should ask whether there is clear board level responsibility on AI risk whether executive incentives are alig…
S49
Top investor urges boards to strengthen AI competency — Norway’s $1.7 trillion sovereign wealth fund, one of the world’s largest investors, is calling for improved AI governance …
S50
AI That Empowers Safety Growth and Social Inclusion in Action — Summary:The discussion revealed relatively low levels of direct disagreement, with most speakers aligned on fundamental …
S51
Agenda item 6 — Djibouti:Thank you, Chairman. At the outset, allow me also to thank you for the sincere words of recognition with which …
S52
Development of Cyber capacities in emerging economies | IGF 2023 Open Forum #6 — This Open Forum follows the dialogue already opened in the workshop at the WSIS Forum 2023 “Cybersecurity and cyber resi…
S53
Opening of the session — Kazakhstan: Thank you, Chair, for giving the floor. Mr. Chair, distinguished delegates, as it’s our first time taking th…
S54
Successes & challenges: cyber capacity building coordination | IGF 2023 — Another key point raised is the need to break down cyber capacity building into more specific categories. The analysis s…
S55
Policymaker’s Guide to International AI Safety Coordination — But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this …
S56
Policymaker’s Guide to International AI Safety Coordination — Russell argues that global coordination on AI safety is essential because the potential harms, whether psychological dam…
S57
Can National Security Keep Up with AI? / Davos 2025 — Both the US and China have significantly contributed to the AI ecosystem, with collaboration between scholars from both …
S58
WS #172 Regulating AI and Emerging Risks for Children’s Rights — Global cooperation and dialogue is needed to build common frameworks
S59
AI Meets Cybersecurity Trust Governance & Global Security — Building trust through transparency, incident reporting and standards
S60
Securing Access to the Internet and Protecting Core Internet Resources in Contexts of Conflict and Crises — – **Defining and protecting core Internet resources during conflicts**: The discussion centered on clarifying what const…
S61
WS #343 Revamping decision-making in digital governance — National and regional IGFs (177+ initiatives) represent successful organic multi-stakeholder model that should be recogn…
S62
Opening of the session — Capacity Building and Regional Cooperation Referenced non-paper on role of regional organizations with examples from AS…
S63
WS #173 Action Oriented Solutions to Strengthen the IGF — – **Strengthening National and Regional Initiatives (NRIs)**: Discussion of better integrating local and regional IGF in…
S64
Main Topic 1: Why the WSIS+20 Review Matters and How National and Regional IGFs Can Enhance Stakeholder Participation — Mark Carvell: Okay having covered a message about the IJF in particular which is one area of focus for the review of cou…
S65
UN General Assembly 66th Plenary Meeting – WSIS Plus 20 High-Level Review — At the same time, they have exposed and in some cases deepened inequalities and new divides both between and within coun…
S66
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Discussion point:Investment Risk and Market Dynamics
S67
DeepSeek AI shake-up affects Bitcoin and tech stocks — Bitcoin experienced a 6% drop on 27 January, as stock markets reacted to the debut of China’s open-source AI model, Deep…
S68
Sticking with Start-ups / DAVOS 2025 — Taneja describes the current investment landscape as a mix of caution towards overvalued companies from the COVID era an…
S69
INTRODUCTION — Rather than developing a framework of risks linked to general and thus cross-national assessments, it is t…
S70
AI researchers call for access to generative AI systems to ensure safety testing — Over 100 prominent AI researchers have signed an open letter urging generative AI companies to grant investigators access …
S71
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S72
Artificial General Intelligence and the Future of Responsible Governance — Massive compute investment is driven by the race to be first, though efficiency improvements may reduce requirements
S73
Summit Opening Session — This has to be treated as a shared responsibility, especially amid the massive investment and adoption of artificial int…
S74
WS #123 Responsible AI in Security Governance Risks and Innovation — Jingjie He: So I think the inclusive engagement across stakeholders is essential for the effective global governance of …
S75
Alignment Project to tackle safety risks of advanced AI systems — The UK’s Department for Science, Innovation and Technology (DSIT) has announced a new international research initiative ai…
S76
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — Ramadori criticizes the current approach of trying to fix AI problems after they manifest, arguing that this patching me…
S77
Towards a Safer South Launching the Global South AI Safety Research Network — Dr. Balaraman Ravindran from IIT Madras raised important questions about coordination, noting that multiple AI safety ne…
S78
UNSC meeting: Artificial intelligence, peace and security — France:Madam President, I thank the Secretary-General, as well as Mr. Clark and Yijing for their briefings. Artificial i…
S79
(Day 3) General Debate – General Assembly, 79th session: afternoon session — – Dick Schoof: Prime Minister of the Kingdom of the Netherlands – Winston Peters: Deputy Prime Minister, Minister for F…
S80
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S81
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Clara Neppel:Thank you for having me here, it’s a pleasure. So as it comes to the polls, the question is of course what …
S82
https://app.faicon.ai/ai-impact-summit-2026/keynote-by-dr-pramod-varma-co-founder-chief-architect-nfh-india-ai-impact-summit — Friday evening can be really hard. It’s tiring right after a long week. So thank you for having me here and I don’t want…
S83
International Conference on Cyber Conflict — Registration will be open in early March.
S84
Opening — Alain Berset, Secretary General of the Council of Europe, set the tone for the session by highlighting the unprecedented…
S85
Advancing Scientific AI with Safety Ethics and Responsibility — And I guess my point here is that we’re not going to be able to do that. The non -safeguarded access, like private acces…
S86
https://app.faicon.ai/ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nicolas Miailhe
3 arguments | 149 words per minute | 812 words | 325 seconds
Argument 1
AI Safety Connect as a dedicated platform to convene stakeholders, build capacity, and accelerate safety discussions (Nicolas Miailhe)
EXPLANATION
Miailhe argues that a dedicated platform is needed to regularly bring together governments, industry, academia and civil society to address AI safety. By convening frequent meetings and capacity‑building activities, the platform can keep safety discussions moving at the pace of rapid AI development.
EVIDENCE
He describes AI Safety Connect convening at each AI summit (including events in Paris and India, with Switzerland upcoming), holding a global convening every six months, and conducting capacity-building and trust-building exercises behind closed doors, thereby providing a regular venue for stakeholders to meet and discuss safety [10-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI Safety Connect’s role in convening at each AI summit and holding global meetings every six months is described in the policy guide, confirming its platform purpose [S2][S3].
MAJOR DISCUSSION POINT
Platform for stakeholder convening
AGREED WITH
Sangbu Kim
Argument 2
AI Safety Connect showcases governance coordination mechanisms, tools, and solutions to support transparency (Nicolas Miailhe)
EXPLANATION
Miailhe states that the initiative not only convenes actors but also demonstrates concrete governance tools and solutions that can be adopted worldwide. Showcasing these mechanisms helps build transparency and shared best practices across the AI community.
EVIDENCE
He notes that AI Safety Connect is designed to showcase concrete governance coordination mechanisms, tools, and solutions, highlighting its role in providing transparent governance resources [9-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The guide notes that AI Safety Connect showcases concrete governance coordination mechanisms and tools, supporting transparency [S2].
MAJOR DISCUSSION POINT
Demonstrating governance tools
Argument 3
Engaging industry through forums and workshops is essential to align commercial incentives with safety goals (Nicolas Miailhe)
EXPLANATION
Miailhe emphasizes that collaboration with industry and academia is crucial for aligning market incentives with safety objectives. By involving private sector leaders in workshops and panels, the gap between rapid innovation and safety oversight can be narrowed.
EVIDENCE
He thanks co-hosts, sponsors and mentions that AI Safety Connect engages with industry and academia of India and abroad, hosting panels, solution demonstrations and closed-door workshops with senior industry leaders [19-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sponsors and industry partners are thanked for participation in AI Safety Connect events, highlighting industry engagement through panels and workshops [S2].
MAJOR DISCUSSION POINT
Industry engagement
Stuart Russell
1 argument | 119 words per minute | 250 words | 125 seconds
Argument 1
AI safety requires both technical solutions and coordinated governance; global coordination is essential to prevent cross‑border harms (Stuart Russell)
EXPLANATION
Russell points out that building safe AI systems is both a technical and a governance challenge. Because AI harms can cross national borders, coordinated international governance is necessary to ensure only safe systems are built and deployed.
EVIDENCE
He explains that ensuring AI systems operate safely is partly a technical challenge and partly a governance challenge, and that global coordination is essential because harms such as psychological damage or loss of human control cross borders [39-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Russell’s emphasis on the need for global coordination to address cross-border AI harms is recorded in the coordination guide [S3].
MAJOR DISCUSSION POINT
Need for technical and governance coordination
AGREED WITH
Nicolas Miailhe, Mathias Cormann, Eileen Donahoe
Eileen Donahoe
2 arguments | 122 words per minute | 1101 words | 539 seconds
Argument 1
Policymakers, especially from middle‑power and global‑majority states, must fill the governance gap and shape international AI practices (Eileen Donahoe)
EXPLANATION
Donahoe argues that middle‑power and global‑majority states have a crucial role in closing the fragmented AI governance landscape. Their pooled resources and normative influence can steer international AI practices toward safety.
EVIDENCE
She notes that while discourse has focused on AI superpowers, there is an urgent need for deeper international diplomacy and that middle powers can shape global AI practices through pooled resources, market leverage and regulatory innovation [56-66].
MAJOR DISCUSSION POINT
Middle‑power role in governance
AGREED WITH
Josephine Teo, Gobind Singh Deo
Argument 2
Middle powers can leverage pooled resources, market influence, and normative leadership to drive global AI safety (Eileen Donahoe)
EXPLANATION
Donahoe highlights that middle powers, by combining their resources and market power, can exert normative leadership that influences global AI safety standards. This collective agency can move AI governance from rhetoric to real‑world impact.
EVIDENCE
She explicitly states that through pooled resources, market leverage, normative influence and regulatory innovation, middle powers can shape the direction of global AI practices and safety [62-64].
MAJOR DISCUSSION POINT
Leveraging middle‑power resources
Mathias Cormann
6 arguments | 145 words per minute | 864 words | 356 seconds
Argument 1
Trust is built through inclusive, evidence‑based processes that bring governments, industry, and civil society together (Mathias Cormann)
EXPLANATION
Cormann stresses that trust in AI systems emerges when all relevant actors participate based on objective evidence. Inclusive dialogues that incorporate governments, companies and civil society are essential for trustworthy AI.
EVIDENCE
He explains that trust is built through inclusion and objective evidence, and that bringing together governments, companies, civil society and technical experts is necessary for building confidence in AI systems [77-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cormann stresses that trust is built via inclusive, evidence-based multi-stakeholder processes, as highlighted in the OECD briefing [S2][S12].
MAJOR DISCUSSION POINT
Inclusive trust‑building
AGREED WITH
Nicolas Miailhe, Stuart Russell, Eileen Donahoe
Argument 2
Inclusion of all relevant actors—governments, industry, civil society, and technical experts—is critical for trustworthy AI outcomes (Mathias Cormann)
EXPLANATION
Cormann reiterates that a multi‑stakeholder approach, where each sector contributes its perspective, is vital to ensure AI systems are trustworthy and widely accepted.
EVIDENCE
He notes that bringing together all relevant actors (governments, companies, civil society, technical experts) is what is needed to build trust and achieve trustworthy AI outcomes [77-86].
MAJOR DISCUSSION POINT
Multi‑stakeholder inclusion
Argument 3
Coordinated transparency and a global incident‑reporting framework are the most critical frontier‑AI safety infrastructures; they can evolve into an international incident‑response centre (Mathias Cormann)
EXPLANATION
Cormann identifies coordinated transparency and incident reporting as the key infrastructure needed for frontier AI safety. He envisions this framework maturing into an international incident‑response centre that shares alerts without penalising reporters.
EVIDENCE
He states that the most critical piece of frontier AI safety infrastructure is coordinated transparency and incident reporting, referencing the Hiroshima AI Process Code of Conduct and the Global Partnership on AI (GPAI) Common Framework for Incident Reporting, which could evolve into an international incident-response centre [91-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The most critical frontier-AI safety infrastructure identified is coordinated transparency and incident reporting, with references to the Hiroshima AI Process Code of Conduct and the GPAI framework [S2][S25].
MAJOR DISCUSSION POINT
Incident reporting infrastructure
AGREED WITH
Jann Tallinn
DISAGREED WITH
Jann Tallinn
Argument 4
Open‑source safety and evaluation tools, catalogued by the OECD, help make trustworthy AI implementation practical (Mathias Cormann)
EXPLANATION
Cormann points out that open‑source safety and evaluation tools, hosted in the OECD AI catalog, lower barriers for developers to implement trustworthy AI, making safety more actionable.
EVIDENCE
He mentions that the OECD recently launched an open call for open-source safety and evaluation tools hosted in the OECD.ai catalog, facilitating practical implementation of trustworthy AI [98-99].
MAJOR DISCUSSION POINT
Open‑source safety tools
AGREED WITH
Josephine Teo
Argument 5
Pursue a comprehensive, simultaneous catch‑up across technical, regulatory, and institutional dimensions rather than seeking a single “silver‑bullet” solution (Mathias Cormann)
EXPLANATION
Cormann warns against looking for a single fix, urging a broad, simultaneous effort across technical, regulatory and institutional fronts to catch up with rapid AI advances.
EVIDENCE
He says there is no one-size-fits-all solution; we must play catch-up quickly and comprehensively across the board, emphasizing depth and breadth of effort [251-254].
MAJOR DISCUSSION POINT
Comprehensive catch‑up
AGREED WITH
Gobind Singh Deo, Nicolas Miailhe
DISAGREED WITH
Jann Tallinn
Argument 6
The private sector’s drive for speed, scale, and innovation creates gaps with slower policy cycles, highlighting the need for regulatory alignment (Mathias Cormann)
EXPLANATION
Cormann observes that the private sector’s rapid innovation outpaces policy making, creating oversight gaps that require better regulatory alignment to ensure safety.
EVIDENCE
He notes that markets reward speed, scale and innovation while governments must manage risk, and that AI is moving much faster than policy cycles, creating gaps between innovation and oversight [80-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cormann points out the tension between private-sector speed and slower policy cycles, underscoring the need for regulatory alignment [S2].
MAJOR DISCUSSION POINT
Speed gap between industry and policy
Josephine Teo
3 arguments | 143 words per minute | 889 words | 371 seconds
Argument 1
Effective policy must translate scientific knowledge into actionable standards and consider trade‑offs to maintain public confidence (Josephine Teo)
EXPLANATION
Teo stresses that policymakers need to turn scientific insights into effective, implementable standards while balancing trade‑offs. Policies must be evidence‑based and assess what is lost or gained by safety measures.
EVIDENCE
She explains that policymakers must consider whether policies are effective, understand what works versus what looks good on paper, and weigh trade-offs to minimize losses, using her aviation safety experience as an illustration of the need for rigorous testing and standards [103-118].
MAJOR DISCUSSION POINT
Science‑to‑policy translation
Argument 2
Singapore can act as a bridge between major AI producers, translating scientific insights into effective, interoperable policies (Josephine Teo)
EXPLANATION
Teo positions Singapore as a middle‑power that can mediate between AI‑producing nations, converting scientific findings into interoperable regulatory frameworks that other countries can adopt.
EVIDENCE
She notes Singapore’s distinctive pro-innovation position and its role in translating scientific knowledge into policy, emphasizing its ability to bridge major AI producers and create interoperable standards [103-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Singapore’s distinctive pro-innovation, diplomatic position enabling it to bridge major AI producers and translate science into policy is noted in the coordination guide [S3].
MAJOR DISCUSSION POINT
Singapore as bridging middle power
AGREED WITH
Eileen Donahoe, Gobind Singh Deo
Argument 3
Refresh AI safety research priorities and develop robust testing tools to give developers practical assurance (Josephine Teo)
EXPLANATION
Teo calls for updating AI safety research agendas and creating better testing tools, as existing frameworks quickly become outdated. Practical testing tools will give developers concrete assurance of safety.
EVIDENCE
She states that AI safety research priorities need refreshing because they become outdated quickly, and that better testing tools are required to provide developers with practical assurance [240-249].
MAJOR DISCUSSION POINT
Updating research and testing tools
Gobind Singh Deo
3 arguments | 174 words per minute | 535 words | 183 seconds
Argument 1
Regional enforcement bodies and networks (e.g., ASEAN AI Safety Network) are needed to turn standards into practice (Gobind Singh Deo)
EXPLANATION
Deo argues that without agencies capable of enforcing standards, regulations remain paper‑only. Regional networks like the ASEAN AI Safety Network can provide the enforcement mechanism needed to operationalise standards.
EVIDENCE
He emphasizes that standards, regulations and legislation require an agency that can enforce them, and cites the ASEAN AI Safety Network as a regional effort that needs sustained political will, technical capacity and resources to be operationalised [158-166].
MAJOR DISCUSSION POINT
Need for enforcement agencies
Argument 2
ASEAN’s AI Safety Network exemplifies a regional approach that combines national capacity building with collective coordination (Gobind Singh Deo)
EXPLANATION
Deo highlights the ASEAN AI Safety Network as a model where individual countries build capacity while simultaneously coordinating region‑wide AI safety efforts, offering a template for other middle powers.
EVIDENCE
He notes that under Malaysia’s ASEAN chairmanship, AI was placed at the centre of ASEAN’s agenda through the ASEAN AI Safety Network, illustrating a dual-track approach of national capacity building and regional coordination [158-166].
MAJOR DISCUSSION POINT
Regional coordination model
AGREED WITH
Eileen Donahoe, Josephine Teo
Argument 3
Institutionalize AI‑safety governance structures to ensure sustainability and enforceability of standards (Gobind Singh Deo)
EXPLANATION
Deo calls for the creation of permanent institutions that can keep pace with rapid AI advances, ensuring that governance structures are sustainable and that standards are consistently enforced.
EVIDENCE
He states that we need to build structures and perhaps institutionalize the conversation about AI security and governance to make it sustainable and enforceable [253-254].
MAJOR DISCUSSION POINT
Institutionalization of AI governance
AGREED WITH
Mathias Cormann, Nicolas Miailhe
Sangbu Kim
3 arguments | 112 words per minute | 525 words | 280 seconds
Argument 1
Capacity‑building partnerships with advanced economies and firms help low‑capacity countries design safety‑by‑design AI systems (Sangbu Kim)
EXPLANATION
Kim argues that low‑capacity nations need close collaboration with advanced economies and leading tech firms to embed safety architecture from the design stage, enabling them to keep pace with frontier AI.
EVIDENCE
He describes partnerships with big-tech companies that run red-team exercises, allowing developing countries to learn how to prevent AI attacks and to design safety-by-design systems from the outset [178-183].
MAJOR DISCUSSION POINT
Partnerships for safety‑by‑design
AGREED WITH
Nicolas Miailhe
Argument 2
Partnerships with leading AI firms for red‑team exercises enable developing countries to learn how to detect and mitigate AI attacks (Sangbu Kim)
EXPLANATION
Kim points out that red‑team collaborations with advanced firms provide practical experience for emerging economies to identify and counter AI‑driven threats.
EVIDENCE
He mentions that a big-tech company is running red-team exercises, helping countries understand how to attack and defend AI systems before deployment [181-183].
MAJOR DISCUSSION POINT
Red‑team capacity building
Argument 3
Allocate dedicated funding to embed safety architecture from the design stage and to support capacity‑building collaborations (Sangbu Kim)
EXPLANATION
Kim stresses that safety measures require financial investment from the beginning of AI system design, and that funding should also support ongoing capacity‑building partnerships with advanced partners.
EVIDENCE
He urges that AI safety measures are under-invested and calls for allocating money to embed safety architecture from the design stage and to fund capacity-building collaborations [178-185].
MAJOR DISCUSSION POINT
Funding for safety‑by‑design
Jann Tallinn
4 arguments | 143 words per minute | 517 words | 216 seconds
Argument 1
A slowdown of frontier AI development, coupled with greater transparency about lab activities, is required to manage existential risk (Jann Tallinn)
EXPLANATION
Tallinn argues that the race to superintelligence poses existential danger, and that slowing development while increasing transparency about lab capabilities is essential to mitigate that risk.
EVIDENCE
He notes the need for a slowdown, cites political pressure, signatures on a superintelligence statement, and stresses that transparency about what AI labs are doing is crucial for managing risk [208-227].
MAJOR DISCUSSION POINT
Slowdown and transparency
AGREED WITH
Mathias Cormann
DISAGREED WITH
Mathias Cormann
Argument 2
International pressure and public advocacy can compel leading AI nations to adopt slowdown measures (Jann Tallinn)
EXPLANATION
Tallinn highlights that growing public and political pressure, demonstrated by signatures and statements, can force AI‑leading countries to consider slowing down development.
EVIDENCE
He references the superintelligence statement gaining over 130,000 signatures and the need for political demand to create awareness and pressure on AI leaders [219-227].
MAJOR DISCUSSION POINT
Public pressure for slowdown
Argument 3
Implement a deliberate slowdown of frontier AI development, supported by greater transparency about lab capabilities (Jann Tallinn)
EXPLANATION
Tallinn reiterates that a purposeful deceleration of AI progress, combined with open information about lab activities, is the most viable path to keep existential risk in check.
EVIDENCE
He repeats the call for slowdown and transparency, emphasizing that without sufficient pressure the world cannot effectively govern superintelligence development [219-227].
MAJOR DISCUSSION POINT
Deliberate slowdown
AGREED WITH
Mathias Cormann
Argument 4
Historically, investors could shape AI incentives, but today leading AI firms are beyond typical investor influence; IPO dynamics diminish private leverage (Jann Tallinn)
EXPLANATION
Tallinn observes that while investors once could affect AI development, the current scale and IPO‑driven nature of leading AI companies limit investors’ ability to influence safety decisions.
EVIDENCE
He states that investors no longer play a significant role because leading AI companies are moving towards IPOs, creating a level playing field where lack of funding by one investor is quickly filled by another, reducing investor influence [231-235].
MAJOR DISCUSSION POINT
Diminished investor influence
DISAGREED WITH
Eileen Donahoe
Osama Manzar
1 argument | 72 words per minute | 193 words | 159 seconds
Argument 1
The overarching goal of AI safety is to protect human intelligence and wellbeing from harmful AI outcomes (Osama Manzar)
EXPLANATION
Manzar frames AI safety as a mission to safeguard human cognition and overall wellbeing, likening it to protecting people from a dangerous vehicle before teaching them how to drive.
EVIDENCE
He states that the safety aspect of AI should focus on saving people, asks how to save human intelligence from artificial intelligence, and calls for embedding safety guards and ethics into AI systems [272-277].
MAJOR DISCUSSION POINT
Protecting human intelligence
Agreements
Agreement Points
A coordinated global governance framework with multi‑stakeholder inclusion is essential for AI safety
Speakers: Nicolas Miailhe, Stuart Russell, Mathias Cormann, Eileen Donahoe
AI Safety Connect as a dedicated platform to convene stakeholders, build capacity, and accelerate safety discussions (Nicolas Miailhe)
AI safety requires both technical solutions and coordinated governance; global coordination is essential to prevent cross‑border harms (Stuart Russell)
Trust is built through inclusive, evidence‑based processes that bring governments, industry, and civil society together (Mathias Cormann)
Policymakers, especially from middle‑power and global‑majority states, must fill the governance gap and shape international AI practices (Eileen Donahoe)
All speakers stress that AI safety cannot be achieved without coordinated global governance that brings together governments, industry, academia and civil society on a regular basis [10-15][44-46][77-86][56-66].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for multi-stakeholder engagement in AI risk management highlighted by the OECD, UNESCO and other bodies, and mirrors the IGF-style approach advocated for digital technology governance [S41][S42][S43][S45].
Transparency and coordinated incident‑reporting are critical frontier‑AI safety infrastructure
Speakers: Mathias Cormann, Jann Tallinn
Coordinated transparency and a global incident‑reporting framework are the most critical frontier‑AI safety infrastructures; they can evolve into an international incident‑response centre (Mathias Cormann)
A slowdown of frontier AI development, coupled with greater transparency about lab activities, is required to manage existential risk (Jann Tallinn)
Both speakers argue that coordinated transparency, incident reporting and broader openness about lab activities are essential to manage frontier AI risks [91-96][256].
POLICY CONTEXT (KNOWLEDGE BASE)
Policymaker guides identify coordinated transparency and incident reporting as the most critical safety infrastructure, citing the Hiroshima AI Process Code of Conduct and emerging national guidelines such as India’s AI incident-reporting framework [S44][S46].
Capacity‑building and partnership programmes are needed to help low‑capacity countries embed safety‑by‑design
Speakers: Nicolas Miailhe, Sangbu Kim
AI Safety Connect as a dedicated platform to convene stakeholders, build capacity, and accelerate safety discussions (Nicolas Miailhe)
Capacity‑building partnerships with advanced economies and firms help low‑capacity countries design safety‑by‑design AI systems (Sangbu Kim)
Nicolas highlights AI Safety Connect’s capacity-building activities, while Sangbu describes partnerships with advanced economies and red-team exercises to enable low-capacity nations to embed safety from the design stage [15-16][178-183].
POLICY CONTEXT (KNOWLEDGE BASE)
International discussions emphasize capacity-building for emerging economies in AI and cybersecurity, with initiatives like the OECD AI Policy Observatory and UN-led partnerships targeting low-capacity states, echoing earlier cyber-capacity efforts [S41][S45][S52][S54].
Occasional pauses or a deliberate slowdown of AI development are necessary to ensure safety
Speakers: Mathias Cormann, Jann Tallinn
Occasionally we should slow down. Occasionally we should actually pause. Pause, test, monitor, audit, share information, and take the time and invest in building confidence that these systems can work as intended and respect fundamental rights (Mathias Cormann)
Implement a deliberate slowdown of frontier AI development, supported by greater transparency about lab capabilities (Jann Tallinn)
Mathias suggests occasional pauses to test and audit systems, and Jann calls for a deliberate slowdown of frontier AI development [84-86][256].
Institutionalising AI‑safety governance and creating sustainable structures is required for long‑term impact
Speakers: Gobind Singh Deo, Mathias Cormann, Nicolas Miailhe
Institutionalize AI‑safety governance structures to ensure sustainability and enforceability of standards (Gobind Singh Deo)
Pursue a comprehensive, simultaneous catch‑up across technical, regulatory, and institutional dimensions rather than seeking a single “silver‑bullet” solution (Mathias Cormann)
AI Safety Connect has been founded to encourage global majority engagement into frontier AI safety (Nicolas Miailhe)
Gobind calls for institutionalising AI-safety governance, Mathias stresses a comprehensive catch-up across all dimensions, and Nicolas proposes AI Safety Connect as a permanent convening platform [253-254][251-254][9-15].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for durable, institutional mechanisms is reflected in multi-stakeholder policy frameworks and the establishment of bodies such as the Global Partnership on AI and the OECD AI Policy Observatory, which aim to embed safety governance over time [S41][S45][S42].
Middle powers and regional actors can leverage pooled resources and normative influence to shape global AI safety
Speakers: Eileen Donahoe, Josephine Teo, Gobind Singh Deo
Policymakers, especially from middle‑power and global‑majority states, must fill the governance gap and shape international AI practices (Eileen Donahoe)
Singapore can act as a bridge between major AI producers, translating scientific insights into effective, interoperable policies (Josephine Teo)
ASEAN’s AI Safety Network exemplifies a regional approach that combines national capacity building with collective coordination (Gobind Singh Deo)
Eileen, Josephine and Gobind all emphasize that middle-power and regional actors can use pooled resources, market leverage and normative leadership to drive global AI safety [56-66][103-112][158-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Regional co-chairs (e.g., Korea and Singapore) of the Global Partnership on AI illustrate how middle powers can pool resources and set norms, a model discussed in recent policy coordination meetings [S45].
Updating AI safety research priorities and providing practical testing tools are needed for effective implementation
Speakers: Josephine Teo, Mathias Cormann
Refresh AI safety research priorities and develop robust testing tools (Josephine Teo)
Open‑source safety and evaluation tools, catalogued by the OECD, help make trustworthy AI implementation practical (Mathias Cormann)
Josephine calls for refreshed research agendas and better testing tools, while Mathias points to open-source safety tools in the OECD catalog as a way to make trustworthy AI practical [240-249][98-99].
Similar Viewpoints
Both emphasize the necessity of a dedicated, recurring platform and global coordination to align technical development with safety governance [10-15][44-46].
Speakers: Nicolas Miailhe, Stuart Russell
AI Safety Connect as a dedicated platform to convene stakeholders, build capacity, and accelerate safety discussions (Nicolas Miailhe)
AI safety requires both technical solutions and coordinated governance; global coordination is essential to prevent cross‑border harms (Stuart Russell)
Both agree that a slowdown or pause, combined with increased transparency, is essential to mitigate AI risks [84-86][256].
Speakers: Mathias Cormann, Jann Tallinn
Occasionally we should slow down… pause, test, monitor… (Mathias Cormann)
A slowdown of frontier AI development, coupled with greater transparency about lab activities, is required to manage existential risk (Jann Tallinn)
Both highlight the pivotal role of middle‑power and regional actors in building sustainable AI governance structures [56-66][253-254].
Speakers: Eileen Donahoe, Gobind Singh Deo
Policymakers, especially from middle‑power and global‑majority states, must fill the governance gap and shape international AI practices (Eileen Donahoe)
Institutionalize AI‑safety governance structures to ensure sustainability and enforceability of standards (Gobind Singh Deo)
Both stress the need for capacity‑building and practical tools to translate scientific knowledge into effective safety measures [240-249][178-183].
Speakers: Josephine Teo, Sangbu Kim
Refresh AI safety research priorities and develop robust testing tools (Josephine Teo)
Capacity‑building partnerships with advanced economies and firms help low‑capacity countries design safety‑by‑design AI systems (Sangbu Kim)
Both call for institutional, systematic approaches rather than ad‑hoc fixes to AI governance [253-254][251-254].
Speakers: Gobind Singh Deo, Mathias Cormann
Institutionalize AI‑safety governance structures to ensure sustainability and enforceability of standards (Gobind Singh Deo)
Pursue a comprehensive, simultaneous catch‑up across technical, regulatory, and institutional dimensions rather than seeking a single “silver‑bullet” solution (Mathias Cormann)
Unexpected Consensus
Agreement on the need for a deliberate slowdown/pausing of AI development despite coming from different professional backgrounds (technical governance vs activist/ethical perspective)
Speakers: Mathias Cormann, Jann Tallinn
Occasionally we should slow down… pause, test, monitor… (Mathias Cormann)
Implement a deliberate slowdown of frontier AI development, supported by greater transparency about lab capabilities (Jann Tallinn)
While Mathias frames the pause as a technical governance measure and Jann frames it as an ethical/existential safeguard, both converge on the necessity of slowing AI progress to manage risk [84-86][256].
Overall Assessment

The panel shows strong convergence on several core themes: the need for coordinated global governance with multi‑stakeholder inclusion, transparency and incident‑reporting mechanisms, capacity‑building partnerships, periodic pauses or slowdowns in development, institutionalised governance structures, and an active role for middle‑power and regional actors. Additionally, there is agreement on updating research priorities and providing practical testing tools.

High consensus across most speakers, indicating a shared understanding that coordinated, inclusive, and transparent mechanisms—supported by capacity‑building and institutionalisation—are essential for safe AI development. This broad alignment suggests that concrete policy initiatives (e.g., incident‑reporting frameworks, middle‑power coalitions, and capacity‑building programs) have a strong foundation for international adoption.

Differences
Different Viewpoints
Role of investors in AI safety governance
Speakers: Jann Tallinn, Eileen Donahoe
Historically, investors could shape AI incentives, but today leading AI firms are beyond typical investor influence; IPO dynamics diminish private leverage (Jann Tallinn)
Policymakers, especially from middle‑power and global‑majority states, must fill the governance gap and shape international AI practices; investors are largely absent from the governance conversation and need to be brought in meaningfully (Eileen Donahoe)
Jann argues that investors no longer have meaningful influence over AI development because leading firms are moving toward IPOs and can replace any missing funding, whereas Eileen stresses that investors remain a decisive lever and should be integrated into the safety conversation to shape incentives [231-235][228-230].
POLICY CONTEXT (KNOWLEDGE BASE)
Investor-focused guidance urges board-level AI risk responsibility and alignment of incentives, as highlighted by the Norway sovereign wealth fund and broader responsible-innovation frameworks [S48][S49].
Primary safety measure – slowdown of AI development vs building coordinated incident‑reporting infrastructure
Speakers: Jann Tallinn, Mathias Cormann
A slowdown of frontier AI development, coupled with greater transparency about lab activities, is required to manage existential risk (Jann Tallinn)
Coordinated transparency and a global incident‑reporting framework are the most critical frontier‑AI safety infrastructures; they can evolve into an international incident‑response centre (Mathias Cormann)
Pursue a comprehensive, simultaneous catch‑up across technical, regulatory, and institutional dimensions rather than seeking a single “silver‑bullet” solution (Mathias Cormann)
Jann calls for a deliberate deceleration of AI progress together with lab transparency as the key mitigation, while Mathias emphasizes building a coordinated transparency and incident-reporting system (and a broader catch-up effort) without explicitly advocating a slowdown [208-227][91-96][251-254].
POLICY CONTEXT (KNOWLEDGE BASE)
While some stakeholders prioritize a development pause, policy documents stress incident-reporting infrastructure as the immediate safety lever, indicating a tension between speed-control and transparency measures [S44][S47].
Whether a single policy lever (slowdown) can solve AI safety versus a multi‑dimensional comprehensive approach
Speakers: Mathias Cormann, Jann Tallinn
Pursue a comprehensive, simultaneous catch‑up across technical, regulatory, and institutional dimensions rather than seeking a single “silver‑bullet” solution (Mathias Cormann)
A slowdown of frontier AI development, coupled with greater transparency about lab activities, is required to manage existential risk (Jann Tallinn)
Mathias rejects the notion that a single measure can address AI safety, urging a broad, simultaneous effort, whereas Jann proposes slowdown (combined with transparency) as the pivotal single lever, reflecting a clash over the strategic focus [251-254][208-227].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-dimensional governance approaches are advocated in interdisciplinary AI risk literature, arguing that a single lever is insufficient, echoing the multi-stakeholder, systemic-risk perspective [S42][S47].
Unexpected Differences
Investors’ relevance to AI safety governance
Speakers: Jann Tallinn, Eileen Donahoe
Historically, investors could shape AI incentives, but today leading AI firms are beyond typical investor influence; IPO dynamics diminish private leverage (Jann Tallinn)
Investors are largely absent from the governance conversation; what would it take to bring investors meaningfully into the safety conversation? (Eileen Donahoe)
While many expect private capital to be a lever for safety, Jann asserts that the scale of current AI firms renders investors ineffective, directly contradicting Eileen’s call for their inclusion-a surprising clash given the usual emphasis on multi-stakeholder involvement [231-235][228-230].
POLICY CONTEXT (KNOWLEDGE BASE)
Investor engagement is seen as a critical governance component, with calls for AI competency at board level and alignment with human-rights risk mitigation [S48][S49].
Overall Assessment

The panel shows broad consensus on the need for stronger coordination, inclusive governance, and capacity building, but diverges on the primary levers: Jann pushes for a deliberate slowdown and greater lab transparency, whereas others (Mathias, Stuart, Eileen) prioritize building coordinated incident‑reporting systems, multi‑stakeholder trust processes, and middle‑power diplomatic engagement. A notable unexpected split concerns the role of investors, with Jann deeming them irrelevant and Eileen urging their involvement.

Moderate to high. While participants share the overarching goal of safer AI, they differ sharply on strategic priorities (slowdown vs infrastructure) and on who should drive change (investors vs governments/middle powers). These divergences could impede the formulation of a unified policy agenda unless reconciled through compromise mechanisms.

Partial Agreements
All three agree that stronger coordination and multi‑stakeholder engagement are needed to achieve safe AI, but differ on the primary actors and mechanisms – Stuart stresses worldwide coordination, Eileen highlights middle‑power pooling, and Mathias focuses on inclusive, evidence‑based processes [44-46][56-66][77-86].
Speakers: Stuart Russell, Eileen Donahoe, Mathias Cormann
Global coordination is essential because AI harms cross borders (Stuart Russell)
Middle powers must fill the fragmented governance gap and shape international AI practices (Eileen Donahoe)
Trust is built through inclusion and objective evidence; bringing together governments, industry, civil society, and technical experts is essential (Mathias Cormann)
Both aim to ensure AI standards become effective on the ground; Gobind stresses the creation of enforcement agencies at the regional level, while Josephine emphasizes translating science into national policies and testing tools, reflecting different implementation pathways [162-166][110-118].
Speakers: Gobind Singh Deo, Josephine Teo
Regional enforcement bodies and networks (e.g., ASEAN AI Safety Network) are needed to turn standards into practice (Gobind Singh Deo)
Effective policy must translate scientific knowledge into actionable standards and consider trade‑offs to maintain public confidence (Josephine Teo)
Both seek trustworthy AI; Mathias focuses on inclusive processes to build trust, whereas Josephine calls for updated research agendas and concrete testing tools, showing complementary routes to the same goal [77-86][246-249].
Speakers: Mathias Cormann, Josephine Teo
Trust is built through inclusion and objective evidence; multi‑stakeholder dialogue is essential (Mathias Cormann)
Refresh AI safety research priorities and develop robust testing tools to give developers practical assurance (Josephine Teo)
Takeaways
Key takeaways
AI safety is lagging behind rapid AI development; coordinated governance and technical solutions are both essential.
AI Safety Connect serves as a convening platform to accelerate safety discussions, build capacity, and showcase governance tools.
Global coordination, especially involving middle‑power and global‑majority states, is critical to close the governance gap.
Trust is built through inclusive, evidence‑based processes that bring together governments, industry, civil society, and technical experts.
The OECD emphasizes transparent incident reporting and a potential international incident‑response centre as core frontier‑AI safety infrastructure.
Open‑source safety and evaluation tools, catalogued by the OECD, can make trustworthy AI more practicable.
Regional mechanisms such as the ASEAN AI Safety Network illustrate how national capacity‑building can be combined with collective coordination.
Developing‑world capacity can be enhanced through partnerships with advanced economies and firms (e.g., red‑team exercises).
A deliberate slowdown of frontier AI development, coupled with greater transparency about lab activities, is advocated to manage existential risk.
Investors’ influence on AI safety has diminished as leading AI firms become too large for typical private‑sector leverage.
Resolutions and action items
AI Safety Connect will continue its semi‑annual global convenings and plans a fourth edition at the UN General Assembly in New York.
The OECD will expand coordinated transparency and incident‑reporting frameworks, aiming toward an international AI incident‑response centre.
The OECD AI Policy Observatory will keep updating the open‑source safety‑tool catalog and promote peer learning.
Singapore will publish a refreshed AI safety research priority list and advance development of testing tools within the next 12 months.
ASEAN will operationalise its AI Safety Network, with concrete steps to build enforcement agencies and sustain political will over the next 12‑18 months.
The World Bank will deepen collaborations with high‑capacity economies and tech firms to provide red‑team and safety‑by‑design support to Global South clients.
All participants called for allocating dedicated funding to embed safety architecture from the design stage of AI systems.
A public‑awareness campaign and transparency push were suggested to create pressure for a slowdown of frontier AI development.
Unresolved issues
The specific governance model and legal authority for an international AI incident‑response centre remain undefined.
How to achieve enforceable, interoperable standards across jurisdictions without creating excessive compliance burdens.
Mechanisms for effectively involving private investors in safety governance were not clarified.
Funding sources and sustained financial commitments for regional bodies like the ASEAN AI Safety Network are still uncertain.
The exact trade‑offs between safety measures and innovation speed, especially for smaller states lacking jurisdiction over AI origins, need further analysis.
Details on how to translate scientific findings into actionable policy frameworks across diverse regulatory environments were not resolved.
Suggested compromises
Introduce occasional, targeted pauses in AI development to allow testing, auditing, and confidence‑building before further scaling.
Adopt a phased, coordinated incident‑reporting system that protects companies from legal or commercial penalties while sharing near‑miss data.
Leverage middle‑power pooled resources and normative influence to bridge gaps between fast‑moving AI labs and slower policy cycles.
Combine national capacity‑building with regional enforcement mechanisms (e.g., the ASEAN network) to balance sovereignty with collective safety.
Encourage transparency from leading AI labs as a condition for continued investment and market access, aligning private incentives with safety goals.
Thought Provoking Comments
AI safety is not keeping pace with the rapid deployment of frontier AI; we need a faster tempo for safety discussions, convening every six months and doing capacity‑building and trust‑building behind closed doors.
Frames the core problem as a timing mismatch between technology progress and governance, justifying the creation of AI Safety Connect and the need for continuous, structured global engagement.
Sets the agenda for the whole panel, prompting other speakers to address coordination mechanisms, incident reporting, and the role of middle powers in closing the safety gap.
Speaker: Nicolas Miailhe
The harms of AI—whether psychological damage or loss of human control—cross borders, so global coordination is essential; this is why we are holding the summit in India, a champion of universal participation.
Highlights the transnational nature of AI risks and links the location of the summit to the principle of inclusive governance, shifting the conversation from technical challenges to geopolitical coordination.
Leads directly to Eileen Donahoe’s focus on middle‑power agency and frames the subsequent discussion around international diplomacy rather than purely technical solutions.
Speaker: Stuart Russell
Middle powers and global‑majority states can shape AI safety through pooled resources, market leverage, normative influence, and regulatory innovation—‘leading from the middle’ may be more powerful than previously anticipated.
Introduces the novel concept that non‑superpower nations can drive safety standards, expanding the conversation beyond the usual US‑EU‑China focus.
Triggers targeted questions to the OECD Secretary‑General, Singapore’s Minister, and Malaysia’s Minister about how their countries can operationalise this middle‑power leadership.
Speaker: Eileen Donahoe
Trust is built through inclusion and objective evidence; we must sometimes pause, test, audit, and share information to build confidence that AI systems respect fundamental rights.
Combines a practical governance recommendation (pausing and auditing) with a principle (inclusion) and links it to the need for coordinated transparency and incident reporting.
Sets up the later emphasis on incident‑reporting frameworks and the proposal for an international AI incident‑response centre, influencing both Singapore’s and Malaysia’s responses about standards and enforcement.
Speaker: Mathias Cormann
We need to translate scientific knowledge into effective policy, understanding trade‑offs and testing requirements—just as aviation safety required rigorous distance standards for A380 take‑offs, AI safety needs interoperable standards and extensive testing across contexts.
Provides a concrete analogy that clarifies how scientific evidence must be operationalised into policy, emphasizing the need for standards that work across diverse environments.
Deepens the discussion on practical implementation, prompting calls for refreshed AI safety research priorities and better testing tools in the 12‑month window.
Speaker: Josephine Teo
Standards and regulations are useless without an agency that can enforce them; otherwise they remain strong on paper but have no impact.
Shifts focus from normative frameworks to institutional capacity, highlighting enforcement as the critical missing piece in AI governance.
Leads other panelists to stress the need for sustainable institutions and aligns with the OECD’s call for coordinated incident reporting and the World Bank’s emphasis on building safety architecture from the ground up.
Speaker: Gobind Singh Deo
AI is the ‘spear’ that can penetrate any shield, but we can also build stronger shields using AI itself; the solution lies in close collaboration with advanced economies and companies to learn defensive techniques.
Uses a vivid metaphor to illustrate the dual nature of AI as both threat and defense, underscoring the necessity of knowledge transfer from high‑capacity actors to the Global South.
Reinforces the panel’s theme of partnership, prompting discussion on how the World Bank can facilitate such collaborations and supporting the call for shared incident‑response mechanisms.
Speaker: Sangbu Kim
The biggest danger is the race inside labs to create superintelligence; a prohibition would require broad scientific consensus and strong public buy‑in, and political pressure can make such a prohibition feasible.
Introduces the radical idea of an effective prohibition on superintelligent AI development, linking it to public mobilisation and political leverage rather than technical safeguards alone.
Creates a turning point by moving the conversation from coordination and standards to the possibility of outright bans, prompting follow‑up questions about investor influence and the practicality of slowing down development.
Speaker: Jann Tallinn
Investors no longer have leverage over leading AI companies because they are moving toward IPOs; the market will fund projects regardless, making investor‑driven safety interventions largely ineffective today.
Challenges the assumption that financial actors can steer AI safety, highlighting a structural shift in corporate financing that limits traditional governance levers.
Narrows the focus of the discussion on where influence can realistically be exerted—shifting attention back to governments, international bodies, and public pressure rather than private capital.
Speaker: Jann Tallinn
We have a narrow 12‑ to 24‑month window before frontier AI outpaces our ability to evaluate and govern it; the focus should be on refreshing AI safety research priorities and developing practical testing tools.
Quantifies the urgency with a concrete timeline and proposes specific, actionable priorities, moving the abstract debate toward immediate deliverables.
Guides the concluding segment of the panel, aligning all participants on short‑term actions and reinforcing the earlier calls for incident reporting, standards, and institutional capacity.
Speaker: Josephine Teo
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that reframed AI safety from a purely technical problem to a geopolitical and institutional challenge. Stuart Russell’s emphasis on cross‑border harms set the stage for Eileen Donahoe’s middle‑power narrative, which was then fleshed out through concrete examples from the OECD, Singapore, Malaysia, and the World Bank. The most significant turning points were Mathias Cormann’s call for coordinated transparency and incident reporting, Josephine Teo’s aviation analogy for translating scientific evidence into policy, and Jann Tallinn’s provocative proposal of a prohibition on superintelligence development. These comments shifted the tone from descriptive to prescriptive, prompting participants to focus on enforcement mechanisms, short‑term actionable priorities, and the limits of investor influence. Collectively, the highlighted comments shaped a consensus that immediate, inclusive, and institutionally backed coordination, especially around incident reporting and testing tools, is essential within a narrow window before frontier AI capabilities outstrip governance capacities.

Follow-up Questions
What are the key lessons learned from building consensus and implementing AI safety frameworks, and what is the most critical piece of coordinated frontier‑AI safety infrastructure to build now (e.g., an international incident‑response centre)?
Understanding past successes and prioritizing infrastructure is essential for effective global coordination of AI safety.
Speaker: Eileen Donahoe (directed to Mathias Cormann)
Can Singapore and other middle powers bridge the coordination gap to keep scientific and safety channels open, and what is the most important step they can take in the next 12 months to establish a shared minimum understanding of frontier safety?
Middle powers can influence global governance; identifying concrete actions will help them shape AI safety outcomes.
Speaker: Eileen Donahoe (directed to Josephine Teo)
What lessons can other middle powers draw from Malaysia’s experience with the ASEAN AI Safety Network, and what concrete steps must ASEAN take in the next 12–18 months to move beyond aspirational goals?
Sharing best practices and defining actionable regional steps are crucial for coordinated AI safety across ASEAN.
Speaker: Eileen Donahoe (directed to Gobind Singh Deo)
How can the World Bank help Global South countries become active shapers of AI safety and reliability requirements before large‑scale deployment?
The World Bank’s involvement could accelerate capacity building and safe adoption of frontier AI in developing economies.
Speaker: Eileen Donahoe (directed to Sangbu Kim)
What would an effective prohibition on superintelligent AI development look like in practice, and how could it be implemented?
Clarifying enforcement mechanisms is vital for any moratorium to be credible and workable.
Speaker: Eileen Donahoe (directed to Jann Tallinn)
What would it take to bring investors meaningfully into the AI safety conversation?
Investors shape incentives; their engagement could align market forces with safety objectives.
Speaker: Eileen Donahoe (directed to Jann Tallinn)
What should be prioritized in the next 12–24 months to enhance AI safety and security?
Identifying short‑term priorities helps focus limited resources before capabilities outpace governance.
Speaker: Eileen Donahoe (asked to all panelists)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.