Why science matters in global AI governance

20 Feb 2026 18:00h - 19:00h

Session at a glance: Summary, key points, and speakers overview

Summary

The session opened with Anil Ananthaswamy stating that effective governance requires understanding, and the UN Secretary-General highlighted the urgency of grounding AI policy in science [1-2][5-15]. Guterres announced the creation of an Independent International Scientific Panel on AI, describing it as independent, globally diverse and multidisciplinary, intended to provide a shared baseline of analysis for all countries [17-22][24-26]. He argued that science-led guardrails can protect human rights while accelerating innovation, and that a universal scientific language can align technical standards and reduce fragmented rule-making [27-33][38-43]. Emphasising human oversight, he said policy must be evidence-based, with clear accountability so decisions are not outsourced to algorithms [45-49].


In the fireside chat, Yoshua Bengio noted that AI scientists often disagree on future risks, making it essential to identify where evidence is strong and where uncertainty remains, similar to climate tipping-point debates [70-78]. He stressed that rapid AI advances create a lag between scientific findings and policy action, requiring neutral, accessible evaluations for policymakers [81-84]. Soumya Swaminathan compared the AI challenge to the COVID-19 response, urging rapid, globally coordinated evidence mechanisms and inclusive systems that reflect diverse contexts, especially in low-income settings [206-214][217-220]. Balaraman Ravindran highlighted the lack of data on AI’s social impacts in the Global South, citing education and agriculture as areas where evidence on effectiveness and equity is still missing [225-236][229-240].


Anne Bouverot argued that misunderstanding fuels fear, and that accurate scientific panels are needed to inform both citizens and policymakers, using past job-loss predictions as an example of how evidence shapes policy choices [250-275]. Ajay Sood described India’s National AI Governance Framework, which combines public-private partnerships, techno-legal design, and capacity-building to manage risks while scaling AI services [283-300]. Singapore’s Minister Josephine Teo reinforced the need for sustained research investment, a balance between speed and caution, and international cooperation to create interoperable standards, positioning the UN as the legitimate hub for such coordination [320-340][345-352].


Across speakers, there was consensus that scientific assessment, shared benchmarks, and inclusive dialogue are critical to prevent fragmented regulations and to operationalise high-level AI principles such as transparency and safety [33-34][331-340]. The discussion concluded that a UN-anchored, multidisciplinary scientific panel can bridge evidence and policy, making AI governance both effective and trustworthy for global development goals [45-49][55][345-346][354-357].


Key points


Major discussion points


Science as the foundation of global AI governance – The UN is building a practical architecture that puts science at the centre, creating an Independent International Scientific Panel to provide a shared baseline of analysis and interoperable technical standards so that “countries at every level of AI capacity can act with the same clarity” and “guardrails… can travel with the technology” [17-24][36-43][45-48].


Bridging the science-policy gap amid uncertainty and rapid change – Yoshua Bengio stresses that AI research shows “very rapid growth… uneven… surpassing most people on some measurements and being kind of stupid… on others,” creating a lag between scientific evidence and policy decisions; he argues for neutral, fact-based evaluations that recognise uncertainty, highlight severe-risk clues, and help policymakers act despite limited proof [67-78][81-86].


Industry’s role in fostering a common, evidence-based understanding – Brad Smith warns that debates often stall because “people don’t have a common understanding of the problem” and are “too quick to want to blame someone,” urging a shift from hype to facts and emphasizing that the UN is the best platform to build that shared scientific basis [103-149][143-148].


Ensuring inclusivity and equity, especially for the Global South – Both Bengio and panelists highlight the need for a globally diverse, multidisciplinary panel that “makes sure that everyone is at the table and no one is on the menu,” and stress that evidence must be actionable for low-income contexts (e.g., COVID-19 experience, AI impacts on youth in India) and that equity should be at the heart of AI for the public good [90-94][213-218][225-236][320-334].


Concrete steps: benchmarks, capacity-building, and operationalising principles – Singapore’s Minister Josephine Teo outlines concrete investments (a $1 billion AI R&D plan, a digital trust centre, AI safety institute) and calls for “standardized evaluation methodologies,” international cooperation on interoperable tools, and capacity-building so that high-level AI principles become actionable across jurisdictions [320-345].


Overall purpose / goal


The session was convened to launch and explain the United Nations’ new science-driven framework for AI governance, particularly the Independent International Scientific Panel, and to explore how robust, globally shared scientific evidence can bridge the gap between rapid AI innovation and responsible policy, while ensuring inclusive participation from all regions and sectors.


Overall tone


The discussion began with a formal, urgent tone emphasizing the need for scientific grounding ([1-4], [17-24]). It shifted to a reflective, technical tone during the fireside chat, acknowledging uncertainty and the difficulty of translating science into policy ([67-86]). The industry contribution added a pragmatic, cautionary tone, warning against hype and urging common understanding ([103-149]). Throughout, the tone remained constructive and collaborative, moving toward optimism as panelists highlighted concrete initiatives and global cooperation ([320-345]). No major negative or confrontational shifts were observed; the conversation consistently aimed at building consensus and actionable pathways.


Speakers

António Guterres


Role / Title: Secretary-General of the United Nations


Areas of Expertise: International diplomacy, multilateral cooperation, AI governance leadership


Sources: [S3]


Anil Ananthaswamy


Role / Title: Moderator / Host, Author of The Elegant Math Behind Machine Learning


Areas of Expertise: Science communication, machine learning, public engagement


Sources: [S26]


Brad Smith


Role / Title: Vice Chair and President, Microsoft Corporation


Areas of Expertise: Technology policy, AI regulation, corporate leadership, privacy & cybersecurity


Sources: [S14], [S15]


Yoshua Bengio


Role / Title: Professor, Université de Montréal; leading AI researcher


Areas of Expertise: Deep learning, AI safety, machine learning research


Sources: [S19]


Balaraman Ravindran


Role / Title: Professor, Indian Institute of Technology Madras; Member, Independent International Scientific Panel on AI


Areas of Expertise: AI, machine learning, applications in agriculture and education, AI policy implications for the Global South


Sources: [S20]


Ajay Sood


Role / Title: Principal Scientific Advisor to the Government of India


Areas of Expertise: National AI governance, digital public infrastructure, techno-legal frameworks, AI risk assessment


Sources: [S23]


Amandeep Singh Gill


Role / Title: Under-Secretary-General and Special Envoy for Digital and Emerging Technologies, United Nations


Areas of Expertise: Digital policy, emerging technologies, multilateral coordination, science-policy interface


Sources: [S11], [S12], [S13]


Anne Bouverot


Role / Title: France’s Special Envoy for Artificial Intelligence; former Director General, GSMA


Areas of Expertise: AI policy, digital trust, telecommunications, AI ethics and governance


Sources: [S9], [S10]


Soumya Swaminathan


Role / Title: Former Chief Scientist, World Health Organization


Areas of Expertise: Global health, evidence-based policy, pandemic response, scientific advisory leadership


Sources: [S1], [S2]


Josephine Teo


Role / Title: Minister for Digital Development and Information, Singapore


Areas of Expertise: Digital policy, AI R&D investment, AI safety and trust infrastructure, international AI governance


Sources: [S6], [S7], [S8]


Additional speakers:


– None. All speakers appearing in the transcript are covered by the provided list of speaker names.


Full session report: Comprehensive analysis and detailed insights

The session opened with Anil Ananthaswamy reminding the audience that “we cannot govern what we do not understand” and introducing United Nations Secretary-General António Guterres, whose leadership places science and multilateral cooperation at the heart of AI governance [1-4][5-15]. Guterres framed the challenge as a race against “AI innovation moving at the speed of light, outpacing our collective ability to fully understand it” and argued that policy must be built on trusted facts rather than hype or disinformation [10-16]. He announced the creation of an Independent International Scientific Panel on Artificial Intelligence, describing it as “fully independent, globally diverse and multidisciplinary” and intended to give every country, regardless of AI capacity, a clear analytical baseline [17-24][25-30]. The Secretary-General stressed that science-led guardrails protect human rights, preserve agency and accelerate innovation, positioning science as a “universal language” that can create interoperable technical standards so that a start-up in New Delhi can scale globally with confidence [31-34][38-43][45-49].


In the subsequent fireside chat, Professor Yoshua Bengio highlighted the difficulty of reaching consensus among AI researchers, noting that “Scientists themselves don’t always agree on what to expect for the future” [70-71]. He argued that a neutral, fact-based synthesis is required to provide a shared understanding for policymakers and used a climate-tipping-point analogy to illustrate why precaution is needed even when evidence is incomplete: “if the risk has huge severity… then policymakers need to pay attention” despite a lack of proof [72-78]. Bengio also pointed out that the rapid, uneven growth of AI capabilities creates an inevitable lag between scientific publications and policy action, because studies involving people can take months while AI systems evolve week by week [81-84]. He suggested that governance should focus on “high-level principles that can be applied without having to go into the details because the details are going to change” [85-88]. This stance contrasts with Guterres’ emphasis on “aligning technical baselines, shared testing and risk measurement” to ensure interoperability and safety across borders [44-48].


Brad Smith, Vice-Chair and President of Microsoft, reinforced the need for a common, evidence-based understanding, warning that “people don’t have a common understanding of the problem” and that debates often devolve into blame without first agreeing on the problem’s context [143-148]. He invoked an 80-year economic-cycle theory to argue that the United Nations, created just over 80 years ago, remains one of humanity’s greatest accomplishments and an indispensable platform for coordinating AI governance [103-112][126-129]. Smith also noted that the UN has helped humanity “live with the ever-constant presence of nuclear weapons without using them,” calling it indispensable to the preservation of our species [115-118]. He criticised the culture of grandiose predictions, noting that his own grading of industry forecasts yielded an average accuracy of only 25% and that “there is no such thing as a crystal ball” [152-166][167-168].


Dr Soumya Swaminathan underscored the importance of inclusive, rapid evidence generation by comparing the AI challenge to the COVID-19 response. She described how, during the pandemic, her team reviewed “a couple of hundred publications every day” to issue timely recommendations, and she called for a global scientific body, “something like the IPCC” for AI, to provide fast, trustworthy evidence that can be adapted to diverse national contexts [206-214][217-220]. She warned that without such mechanisms, policy may be made in advance of evidence, risking irrelevance or harm, and emphasized that “policy must change when evidence becomes clear” [218-220].


Representing the Global South, Professor Balaraman Ravindran highlighted the paucity of data on AI’s social impacts in India, questioning how AI affects youth, children’s mental health, and agricultural productivity, and noting that most stories come from the West and that “we don’t have evidence of AI interventions” in education or farming [229-236]. His remarks illustrate the need for locally generated benchmarks to evaluate AI’s effectiveness and equity, especially in low-resource settings [229-236].


Anne Bouverot, France’s Special Envoy for AI, echoed the theme that misunderstanding fuels fear. She quoted Marie Curie, “nothing in life is to be feared, everything is to be understood”, to argue that scientific panels are essential for both citizens and policymakers [250-259]. Bouverot also cited past job-loss predictions, referencing both the Oxford and Elon Musk forecasts, to show how divergent scientific forecasts lead to vastly different policy responses, from universal basic income to reskilling programmes, underscoring the necessity of accurate, evidence-based forecasts [268-275].


Ajay Sood described India’s National AI Governance Framework, which combines public-private partnerships, “techno-legal” design, and capacity-building to embed governance directly into AI systems, mirroring the country’s earlier digital public-infrastructure experience [283-300]. He acknowledged the current uncertainty about AI risks but argued that embedding safeguards at the technical level offers a pragmatic path forward [291-300].


Singapore’s Minister Josephine Teo presented a concrete investment agenda, noting a US$1 billion national AI R&D plan that funds foundational and applied research on responsible AI [320-322]. She described the Digital Trust Centre and the AI Safety Institute as national assets that operationalise safety standards. Teo stressed the need to balance rapid AI development with careful, evidence-based policy, arguing that “both impulses are necessary” and that international cooperation is essential for interoperable standards [323-326][327-334]. She reiterated the UN’s unique legitimacy for global AI discourse, citing the UN High-Level Advisory Body on AI report (published end-2024) as the basis for the new Independent International Scientific Panel, and warned that operationalising high-level AI principles through standardized evaluation methodologies and capacity-building for all countries is the current challenge [335-345]. Regional actions announced included Singapore’s hosting of the International Scientific Exchange on AI Safety (first edition) and its second edition on 17-18 May [350-353], the Singapore AI Safety Red-Team Challenge (the first multicultural, multilingual exercise for the Asia-Pacific), Singapore’s chairmanship of the ASEAN Work Group on AI Governance and the development of the ASEAN Guide on AI Governance and Ethics (extending to generative AI) [360-363], and an India-wide collaboration on the International Network for Advanced AI Measurement, Evaluation and Science for joint testing efforts [364-367].


Moderator Amandeep Singh Gill framed the discussion as a “science-evidence-policy loop,” opening with the technical observation that “≈90% of AI is matrix multiplication; a 0.01% improvement in its efficiency has huge energy implications” [190-192]. He linked the Independent International Scientific Panel’s work to turning “facts and evidence” into a reliable engine for the Sustainable Development Goals [201-206][309-312]. Gill’s rapid-fire round reinforced the consensus that science must be central, that common technical baselines are vital for interoperability, and that inclusive evidence-generation is essential for equitable outcomes [241-246][309-312].
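Gill’s matrix-multiplication remark can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only and is not from the session: the fleet-scale compute and energy figures are assumed placeholder numbers, chosen simply to show how a small efficiency gain on an operation that dominates AI workloads compounds at scale.

```python
# Back-of-envelope sketch of the "0.01% matmul efficiency" remark.
# All fleet-level figures are illustrative assumptions, not session data.

def matmul_flops(m: int, n: int, k: int) -> int:
    """FLOPs for a dense (m x k) @ (k x n) multiply: one multiply and
    one add per output cell, i.e. 2*m*n*k operations."""
    return 2 * m * n * k

# Assumed annual AI compute of a large fleet, in FLOPs (illustrative).
fleet_flops_per_year = 1e25
# Assumed average energy cost, in joules per FLOP (illustrative).
joules_per_flop = 1e-11

baseline_energy_j = fleet_flops_per_year * joules_per_flop

# If ~90% of that compute is matrix multiplication (the share quoted in
# the session), a 0.01% efficiency gain on matmul alone saves:
savings_j = baseline_energy_j * 0.90 * 0.0001
savings_mwh = savings_j / 3.6e9  # 1 MWh = 3.6e9 J

print(f"Baseline energy:  {baseline_energy_j:.2e} J/year")
print(f"0.01% matmul gain: {savings_j:.2e} J/year ({savings_mwh:.1f} MWh)")
```

The point is not the absolute numbers, which depend entirely on the assumed fleet size and hardware efficiency, but the structure of the argument: because matrix multiplication dominates the workload, fractional improvements translate almost one-for-one into fleet-wide energy savings.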


In conclusion, the participants reached broad agreement that science is the indispensable foundation for AI governance and that the UN-anchored Independent International Scientific Panel will provide the neutral, multidisciplinary evidence needed to bridge the gap between fast-moving technology and responsible policy. Action items include fast-tracking the panel’s first report ahead of the Global AI Governance Summit in July [22-24]; Singapore’s commitment to host the second International Scientific Exchange, develop regional safety benchmarks, and advance the ASEAN Guide and the International Network for Advanced AI Measurement [350-353][360-363][364-367]; Microsoft’s pledge to devote resources to UN-led scientific efforts [182-183]; and India’s rollout of its National AI Governance Framework with techno-legal safeguards [283-300]. The session closed with Minister Josephine Teo reaffirming the UN’s role as the legitimate hub for global AI discourse, urging continued collaboration to turn scientific insight into trustworthy, inclusive governance [335-345][350-353][354-357].


Session transcript: Complete transcript of the session
Anil Ananthaswamy

Today’s session begins from a simple but powerful premise. We cannot govern what we do not understand. It is my honor to open this session with a special address by the Secretary General of the United Nations, whose leadership has placed science and multilateral cooperation at the forefront of global AI governance. So please join me in welcoming His Excellency Antonio Guterres.

António Guterres

Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Thank you for joining this discussion on the role of science in international AI governance. We are barreling into the unknown. AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it, let alone govern it. AI does not stop at borders, and no nation can fully grasp its implications on its own. If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype or disinformation. We need facts we can trust and share across countries and across sectors. Less noise, more knowledge. That is why the United Nations is building a practical architecture that puts science at the center of international cooperation on AI.

And it starts with the Independent International Scientific Panel on Artificial Intelligence. This panel is designed to help close the AI knowledge gap and assess the real impacts of AI across economies and societies so countries at every level of AI capacity can act with the same clarity. It is fully independent, it is globally diverse, and it is multidisciplinary, because AI touches every area of every society. And I’m delighted that the General Assembly of the United Nations confirmed the 40 experts I proposed to member states. Now the real work begins on a fast track to deliver a first report ahead of the Global Summit, the Global Dialogue on AI Governance, in July. The panel will provide a shared baseline of analysis,

helping member states move from philosophical debates to technical coordination, and anchor choices in evidence so policy is neither a blunt instrument that stifles progress nor a bystander to harm. That is how science transcends decision-making. When we understand what systems can do and what they cannot, we can move from rough measures to smarter, risk-based guardrails. Guardrails that protect people, uphold human rights, and preserve human agency. Guardrails that build confidence and give business clarity so innovation can move faster in the right direction. Science-led governance is not a brake on progress. It is an accelerator for solutions. A way to make progress safer, fairer, and more widely shared. It helps us identify where AI can do the most good the fastest.

And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countries can prepare, protect, and invest in people. Today, international cooperation is difficult. Trust is strained, and technological rivalry is growing. Without a common baseline, fragmentation wins, with different regions and different countries operating under incompatible policies and technical standards. A patchwork of rules will raise costs, weaken safety, and widen divides. Science is a universal language. Guided by the independent panel and the global dialogue on AI governance, we can align with the world. We can align our technical baselines. When we agree on how to test systems and measure risk, we create interoperability. So a start-up in New Delhi can scale globally with confidence, because the benchmarks are shared, and safety can travel with the technology.

Finally, let us be clear. Science informs, but humans decide. Our goal is to make human control a technical reality, not a slogan. And that requires meaningful human oversight in every high-stakes decision, in justice, health care, credit. And it requires clear accountability so responsibility is never outsourced to an algorithm. People must understand how decisions are made, challenge them, and get answers. Excellencies, ladies and gentlemen, the message is simple. Less hype, less fear. More facts and evidence. Guided by science, we can transform AI from a source of uncertainty into a reliable engine for the sustainable development goals. Let us build a future where policy is as smart as the technology it seeks to guide. Thank you.

Anil Ananthaswamy

Thank you, Secretary General, for those inspiring opening remarks. Ladies and gentlemen, we were going to have Mr. Brad Smith, Vice Chair and President of Microsoft Corporation, as our next speaker, but he’s running a bit late, so we will move to the next item in the agenda. I would like to welcome Professor Yoshua Bengio to the stage, Scientific Director of MILA and one of the world’s leading AI researchers. He and I will be in a fireside chat, and we’re hoping that Mr. Brad Smith will be able to join us very soon. Thank you. So, welcome Professor Bengio.

Yoshua Bengio

Thank you for having me.

Anil Ananthaswamy

Our pleasure. So, you are the most cited computer scientist. And I looked it up. You’re actually the most cited living scientist today and have played a unique role at the global science policy interface, including through the UN Scientific Advisory Board and your leadership of the International AI Safety Report. So from your perspective, how do these science policy interfaces actually work in practice and where do they add the most value?

Yoshua Bengio

So it’s tricky, right, because there are many different views, especially different interests in business, in different governments. And the role of science, the role of a kind of synthesis of science that we want for the UN panel, that we have sought for the AI Safety Report, is to try to provide a shared understanding as a basis for those political discussions, and not be influenced, as much as is humanly possible, by those tensions that exist in our societies. And I think it’s particularly important because, maybe unlike in the case of climate, the scientists themselves don’t always agree on what to expect for the future or even how to interpret the science that exists.

I just want to add something. Something that’s a little bit subtle about this kind of exercise is to be able to recognize the uncertainty and the divergences that exist: where is it that scientists agree, where is it that the evidence is strong, where is it that we have clues that matter. Even if we’re not certain about a particular risk, we might have clues about it. But if the risk has huge severity, in other words, if it does unfold it could be catastrophic, then policymakers need to pay attention. And it’s always difficult when we don’t have proof that something terrible is going to happen. Maybe a good analogy is tipping points in climate, right?

Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. So the situation is similar in AI, in the sense that we don’t have the experience of, say, machines that are really smart and can change society, and be even potentially smarter than us. So how can we deal with the right policy decisions? That’s why it is so important to have as neutral and as fact-based an evaluation of what is going on available to everyone, in a language that is accessible to everyone, and of course for policymakers, which, by the way, is difficult for scientists to achieve. They need help, they need iterations, they need feedback from people who are used to the interface between science

Anil Ananthaswamy

Is there anything in particular about the highly technical nature of AI and also the pace of change that makes this interface particularly difficult?

Yoshua Bengio

Yes, yes. The facts shown in the scientific benchmarks across labs, companies and academia show very rapid growth of the capabilities of these systems, and that growth is uneven. So we see AIs even surpassing most people on some measurements of capability and being kind of stupid, or like a six-year-old, on some other things. So it’s very difficult to grasp what that means. But because it’s moving so fast, there’s always going to be a lag. Even the scientific papers take time to be written. If there are studies, think about studies that involve people: they’re going to take months. And so by the time we start seeing clues that there’s a potential problem, so you can think of something recent that was not expected, like the psychological effects on people of these chatbots: we now have lots of anecdotal evidence and we’re only starting to see the scientific studies. And of course, on the policy side, it’s going to be even more difficult, even later, because those discussions are going to happen after we see scientific evidence. So there is going to be a lag, and that’s a real problem, because things could move

Anil Ananthaswamy

So maybe that leads well into our next question. We often hear that AI governance is moving too slowly and from your experience, what kinds of scientific assessments or benchmarks could realistically keep pace with this rapid change?

Yoshua Bengio

Yeah, that’s a great question. My opinion on this is that we should be thinking about not just policy in the usual sense of coming up with principles, but we should try to strive for high-level principles that can be applied without having to go into the details, because the details are going to change. And the second thing is, I think we should strive for technologies that are going to help implement those guardrails in the field, in the deployment of AI, because otherwise there’s not enough time to

Anil Ananthaswamy

Well, thank you for those insights. And also congratulations on your recent appointment to the Independent International Scientific Panel on AI. In a few words, how do you see this new panel helping to strengthen the link between science and global AI policymaking?

Yoshua Bengio

Well, I think there’s something really important about this panel: its global aspect and being rooted in the UN. And the reason I’m saying this is that AI is going to be transforming our world very clearly, and it’s going to have global effects, whether it is on the good side, the benefits, or on the risks, but also the kind of power relationships that are going to be changing in the future. And I’m personally very concerned about how this will unfold for developing countries in the Global South. And we need to work in a multidisciplinary way so that we can foresee those effects and we can start discussions to make sure that everyone is at the table and no one is on the menu.

Anil Ananthaswamy

Well said, Professor Bengio. Well, thank you very much for kick-starting our discussion. We will now turn to our panel. So, ladies and gentlemen, it is essential that discussions about AI policy include the voices of key industry actors, and I am pleased to invite Mr. Brad Smith, Vice Chair and President, Microsoft Corporation, for his keynote address.

Brad Smith

Well, good morning, everyone. It’s a pleasure to be here. My apologies for being a few minutes late. I want to offer a couple of thoughts this morning. The first thing I think we should come together to think about is that, in my opinion, this is a moment in time when we need to reflect on and reinvest in the importance of the United Nations. There is a well -known economic theory that says that humanity is, in many ways, almost destined to repeat its great economic mistakes every 80 years. The reason it’s 80 years is because that is basically the lifespan of human beings. And so every 80 years, almost everyone who had any living memory of a prior financial calamity has left the planet.

If you look at the Great Recession that started in 2008, what you realize is that it happened 79 years after the stock market crash that led to the Great Depression in 1929. And you can follow this series of financial mistakes all the way back to the bursting of the tulip bubble in the Netherlands hundreds of years ago. I think there is a corollary worth thinking about. Just as there is a risk that humanity forgets the mistakes it made 80 years ago, humanity runs the risk of forgetting the great successes it created 80 years ago. It was just over 80 years ago that the world came together to create the United Nations. It was, in my opinion, one of humanity’s greatest accomplishments of the 20th century.

It is a unique organization in a very imperfect world. And so, of course, on any day and any year, it is possible for anyone to blame the United Nations for the imperfections that we see all around us. But the truth is this. Those imperfections are fewer, and their consequences are less disastrous, in my view, because of the United Nations. And one of the great things about working at Microsoft in a job like Microsoft, in my opinion, is that I get to work in a global organization. We have subsidiaries in 120 countries. We do work in 190 countries. We see the world. It turns out that everywhere we go, we see the United Nations. Sometimes it’s the United Nations Development Program, working to foster economic development.

Sometimes it is UNHCR, helping refugees. Sometimes it is the UN Office of Human Rights, seeking to protect human rights. But the truth is, if there’s a problem, the United Nations is almost always part of the solution. We need to remember this. And we need to remember that however challenging the last 80 years have been, we have managed, as humanity, as a species, to live with the ever-constant presence of nuclear weapons without using them or destroying ourselves. The United Nations has, in fact, in my view, been indispensable to not just the protection of people, but the preservation of our species. Why does that matter now? Why should we talk about it today and this week in Delhi?

Well, because here we are on the cusp of the future. A technology that we all know will likely change the future. Here we are in the second month of the second quarter of the 21st century, and we need to focus on how we bring the institutions on which we rely into that future. So then let me talk about a second aspect that I think is so important to think about this month. One of the things I’m constantly struck by… leading a global organization is how often everyone disagrees with each other about almost everything. But one of the things I’ve learned along the way is that I think one of the reasons people so quickly disagree is that we rush so quickly to debate competing solutions.

This happens in domestic politics. It happens in international diplomacy. It, frankly, happens in a global company. It actually happens everywhere, even in families. As soon as there’s a problem, people want to talk about the solution. And then people have different solutions, and then they debate, and they disagree, and they argue, and sometimes it’s even worse than that. One of the things I’ve learned is the reason people so often disagree about the solution is they don’t have a common understanding of the problem. They don’t spend enough time talking about the problem. They don’t have a shared contextual understanding of the problem they’re trying to solve. They’re too quick to want to blame someone for the problem, and then that spirals into a discussion that becomes completely unconstructive.

Why does that matter today? Because what we’re here to talk about today is all about creating a more common understanding together, based on science, of where artificial intelligence is going. This is an indispensable tool. Indeed, it’s a critical service for humanity so we can all learn together, we can all think together, we can all understand together what is going on in the world. I think it’s especially critical, to be honest, when it comes to artificial intelligence. Consider most of the conversations you have about this technology. I would argue that they have two flaws. The first flaw is that they usually involve people making very grandiose predictions about the future.

You know what? I’ve worked in the tech sector for 32 years. I have listened for more than three decades to my colleagues in my industry around the world make bold predictions about the future. No one ever holds them accountable a decade later for whether they were right or wrong. I used the researcher agent in Microsoft Copilot a couple weekends ago, and I loaded a lot of names. I won’t say whom, but you can guess. And I said, look at all the predictions they made about all the technologies, and look at the predictions they made about when these technologies would come to do something or another, and give them a grade. The average grade was 25%. You couldn’t even get close to the top.

You were at the bottom. So let’s just understand one thing together. There is no such thing as a crystal ball. No one has one. But what we do have is the ability to understand where we are today. And what we do have is a better understanding to just appreciate what is happening each and every year. There is a second flaw, in my view, in many of the conversations that take place, including at this AI summit. Everybody wants to talk about how they’re going to make machines smarter. That’s interesting. I think it’s interesting to imagine living in a world where a data center is like a country of geniuses. But as I mentioned yesterday, compared to the people who lived in the Bronze Age, we’re all geniuses.

We’re all geniuses already. What that should remind us is that human capability is neither fixed nor finite. And so what really matters, in my opinion, is not whether we are going to build machines that are smarter than humans. Yes, in some ways we will. But how will we use those machines to make people smarter, to help us do what we need to do? That is what this effort is all about. Now, let’s harness the power of science to build a common understanding of what is changing each year, and then let’s connect it with the global dialogue on governance so we can pursue policies that will ensure that this technology serves people. There’s no better place to get started than here.

There’s no better time than now. And let’s face it, there is no better institution on the planet that can do more to serve humanity and protect the world than the United Nations. And on behalf of Microsoft, I just want you to know we are putting our full energy and resources into doing everything that we can to help. Thank you very much.

Anil Ananthaswamy

Thank you. Thank you, Mr. Smith, for those insights on responsibility, accountability, and the role of industry. We now turn to our panel. Our panel brings together scientific leadership, public policy expertise, and international coordination. Please welcome to the stage our speakers: Professor Balaraman Ravindran, IIT Madras; Soumya Swaminathan, former Chief Scientist, WHO; Ajay Kumar Sood, Principal Scientific Advisor to the Government of India; and Anne Bouverot, France’s Special Envoy for AI. I am also pleased to introduce our moderator, Amandeep Singh Gill, Under-Secretary-General and Special Envoy for Digital and Emerging Technologies. I invite him to guide the discussion. Thank you very much.

Amandeep Singh Gill

Thank you very much. Thank you, Anil, for leading us, and for those who have not read his book, The Elegant Math Behind Machine Learning, please do have a go at it. We cannot govern something that we don’t understand. So something as simple as this: if 90% of AI is matrix multiplication, then a 0.01% improvement in the efficiency of matrix multiplication, as he was explaining, has huge energy implications. So I want to welcome our esteemed panelists. The stage has been set by very inspiring keynotes and a fireside chat. So we will dive straight in. And since we are running a little short of time, I’m going to compress the two rounds into one rapid-fire round.

So all of you have worked on or are working on the science-policy interface. And my sense is that there is a loop here, a loop between science and evidence, and between evidence and policy. And we want to explore that loop today in the context of the significant development of the setting up of the International Independent Scientific Panel at the United Nations. So I want to start with you, Soumya. You were the first chief scientist, the first woman chief scientist, at the WHO, and worked at a very difficult time, during COVID, when trusted evidence was so critical. So in your view, what makes the evidence that comes from science trusted and actionable for policymakers?

Soumya Swaminathan

The field is moving very rapidly. In COVID, we had to review a couple of hundred publications every day to understand what was happening on different aspects, on the virus, on the immunology, on how vaccines were working, and drugs, and we had to make recommendations based on the best available evidence that day. I think we may be in a similar situation with AI, and it’s wonderful that the UN has now set up this body, which I see as something like the IPCC. I think we do need global governance. We’re talking now about preventing future pandemics by sharing data on pathogens, making sure that we have protocols in place where countries are willing to share that data, and also, of course, to share the tools, the vaccines or drugs, when they become available, when or in case there is another pandemic.

Similarly, I hope that this scientific body that’s been set up by the UN would also establish systems that would link to national bodies and systems, and that would ensure the voices of all are heard. So one of the things during COVID was that some of our recommendations were relevant in high-income countries but not in low-income countries, because the context is very different. And the WHO was criticized for this, I think rightfully so, and we need to learn from those mistakes. So it’s the voices, for example, of women: a low-income woman, a farmer in a remote place, is going to use technology very differently from a large farmer with access to lots of machines in Europe or North America.

So if AI has to work for everyone, then we need to make sure that those voices are heard. And ultimately, I think that loop you talked about, sometimes policy is made in advance of evidence. You have to. You can’t wait. But the policy must change. It must ask for the relevant evidence and be able to adapt when that is clear.

Amandeep Singh Gill

Thank you very much, Soumya. I’m going to come to you, Ravi, Professor Balaraman Ravindran. Now, as AI policies begin to take shape, and you’ve been involved in some policymaking yourself, what signals from regulators or public sector users should most urgently guide future AI research priorities? So in a sense, you know, the loop coming back into research.

Balaraman Ravindran

So thank you for that question. With AI right now, especially in the Global South, we don’t completely understand the implications of adopting AI and how it is going to affect society, people’s livelihoods, and everything else. In fact, I also feel that we don’t have enough evidence about how AI is even affecting the social fabric: how children are getting increasingly isolated with the adoption of AI, and whether the effect is uniform between cities and rural India, because the cultural setup is very different, and so on. So if the government, as we heard our Honourable Prime Minister say yesterday, should focus more on youth and the impact of AI on youth, what evidence do we have about what is happening in India? We hear stories about how children, and also people who are mentally challenged or under stress, become dependent on AI models, but all of these stories are coming to us from the West. So what is it that’s happening in India? When we have these kinds of policy decisions that have to be made, and the government says that AI should be pushing efficiency in agriculture, do we have a benchmark in India that can evaluate the effectiveness of these AI models in agriculture? What are the kinds of flaws that happen when I, for example, build a bot that can act as a co-pilot for a farmer? These are bigger challenges, so we have a lot of questions.

Amandeep Singh Gill

If I can quickly follow up: where do you actually see evidence for impact in the sustainable development goals space? Just a quick example or two.

Balaraman Ravindran

That was not in the notes he gave us earlier, so I have to think on my feet here. So let me take one thing that we are very familiar with, that we are working on right now, which is the education space. For example, we don’t have evidence on AI interventions: how likely are they to change student learning behavior? So we have done some preliminary studies. The author of the study is somewhere in the audience, because he has been sending me pictures of the stage. What we have found out is that the effectiveness of AI adoption is a direct function of habit. So if the students are using AI more, then they tend to…

But I don’t know what the causal factor is there: whether they are using AI more and therefore getting a better effect, or whether they use AI more because they are getting a better effect. So these are questions that we have to ask, even in something as simple as education. I am saying simple because there is a lot of positive buzz around using AI in education. But even there, we need a lot more evidence to come.

Amandeep Singh Gill

Thank you, Ravi, and we’re honored to have you on the new International Independent Scientific Panel. So if I may jump to you, Anne, and you’re an AI scientist yourself. You know, all of us know you as a special envoy of President Macron, who made the February summit happen last year in Paris, but you’re also an AI scientist. So from your perspective, you kind of lived in these two worlds. So what works best for the interface? What kind of scientific evidence would you take to President Macron if you were to convince him to change the policy?

Anne Bouverot

Well, thank you for the question. I studied AI a long time ago, but I’m not really a scientist. But I try to understand, of course. Understanding, I think, is probably the very first thing. And before we move to policymakers, I think it’s for citizens, for us as human beings. The things that we don’t understand, we tend to be more afraid of. I often quote the scientist Marie Curie. She wasn’t an AI scientist, but she’s one of the brightest scientists that we’ve had, a two-time Nobel laureate. And there’s a wonderful quote by her. She says, nothing in life is to be feared; everything is to be understood. And now is the time to understand more, because, of course, there were things to be afraid of in the time when she was living, and there are now as well.

So trying to understand things, having scientific panels, is definitely the right thing to do. And we’re fully supportive in France of the scientific panel. We’re very proud that Joëlle Barral is our nominee. She’s a scientist in AI and health and a member of the panel. This is absolutely excellent. So, yes, understanding things is absolutely key. And then maybe just a second point, to give an example of how understanding something or not can lead to very different policy decisions in the field of AI and work. We’ve had predictions. I remember in 2013, that was the previous AI revolution, but scientists at Oxford, I believe, said that within 10 years, half of the jobs would disappear. We haven’t seen that.

At the AI summit in Bletchley Park, for very good reasons, we had frontier AI leaders, in particular Elon Musk, saying that within two years, half of the jobs would disappear. So, of course, the fact that this didn’t happen doesn’t mean that there isn’t a risk for work. Of course, there’s a risk for work. But if your potential or probable outcome is the end of jobs, then you need to think about universal basic income: what are we going to do with all the people who don’t have jobs? If what economists are saying is that 80% of the jobs will be transformed, then the policy outcome is training, skilling, reskilling, and helping to educate people. That’s why listening to economists, and having the International Labor Organization and other institutions really follow closely what is happening in which countries, for younger people, for older people, for women, for men, for different types of jobs, that’s super important.

Amandeep Singh Gill

Merci beaucoup, Anne. Merci. And I’m going to turn to you, Professor Sood. You occupy an important position within the Indian system, and you look at science broadly. And India has deployed some of these technologies at societal scale: India Stack, the digital public infrastructure. So how do you look at the AI opportunity, and importantly, how do you look at AI risks? And how are you prioritizing R&D allocations to harness the opportunities and manage the risks?

Ajay Sood

Thank you very much for having me on the panel. On all the aspects you asked about, we have had very extensive consultations across all stakeholders. And we came out with the National AI Governance Framework, not a regulatory framework, but how do we really handle AI, in all aspects. And there we have looked at how we enable compute facilities, compute resources, for our people. Because we are not at the scale where a few trillion dollars are being invested. So we came out with a framework which we think, with public-private partnership, could enable it. And we could see the results of that within a year, as demonstrated at this AI Summit: the release of AI models and so on.

The other aspect which is very important, as you rightly said, is risk assessment. This is where, as has been mentioned, our experience with the digital public infrastructure comes in, which has been rolled out at a very large public scale with safety and security, which is as difficult as in AI. AI, of course, is more difficult. We still do not know the risks. But when we were dealing with the digital public infrastructure, either for financial transactions or for identity verification and so on, it was a challenge. And that was done by embedding governance through technical design. And this is what we call techno-legal, which the Honourable Prime Minister said at the Paris summit.

And also mentioned here. So this is where we are suggesting that this could be one way to look at it. It’s not that everything is laid out. We will need a framework for that. We will need technologies for that. But this is one way which will give a smooth interaction, if we can bring in this technological framework.

Amandeep Singh Gill

Thank you so much for those insights. And now, since we are running out of time, I’m going to discriminate against the men on the panel, so my apologies in advance. I’m going to turn back to you, Soumya and Anne, for a 30-to-40-second reflection. What do you think, in terms of the pace and direction of the technology, about the opportunities, including for accelerating scientific discovery, and the risks? What would be your advice for the International Independent Scientific Panel? Maybe Anne, you can go first, 40 seconds.

Anne Bouverot

Yes, I think AI has a strong potential for helping science. We’ve seen that with the two Nobel prizes in physics and chemistry a year back. There are many more areas in science where AI can help. It can only be possible if we have databases of scientific data that are available to the world and that are constructed by scientists and funded by governments and international institutions around the world. So this is a very important topic for research.

Amandeep Singh Gill

Thank you, Anne. Soumya, you have the last one.

Soumya Swaminathan

Yes, I agree very much with Anne. And I think that the scientific panel could actually help network many more groups of scientists from around the world, perhaps sectorally, for example, what’s happening in health, in education, in agriculture, looking at the evidence as it emerges, encouraging research, setting priorities, but also looking at safety and risks, because I think that’s going to be very important. There may be unanticipated risks and harms that we have not considered. And, of course, equity: being a UN-led panel, ensuring that equity is at the heart of AI and that it’s being done for the public good.

Amandeep Singh Gill

Fantastic, thank you. That’s a great closing. Ladies and gentlemen, please join me in thanking our outstanding panel. And we are going to move straight to the closing. Over to you, Anil.

Anil Ananthaswamy

Thank you to the panel for a rich and forward-looking discussion. To close this session, it is my honor to invite Josephine Teo, Minister for Digital Development and Information of Singapore, to deliver the closing remarks. Minister Josephine Teo.

Josephine Teo

Good morning, everyone. First, allow me to thank the Secretary-General for his remarks, which serve as very useful guidance to all of us working in this important technology. For the closing this morning, I thought that it would perhaps be useful to offer a perspective from a small state. Singapore has a population of just 6 million people, and more than 30 years ago, at the UN, we became the convener of the Forum of Small States, which still has about 108 members. I will just make three points on how we look at developments on this front. The first point is that we believe in AI being used as a force for the public good, but to do so, it is important that we continue to invest in the science that underpins it and ground trust in evidence. This certainly requires sustained investment in research, and it is also the reason why we set aside a billion dollars in a national AI R&D plan, which will include foundational and applied research into responsible AI. We believe in it, and we have to put money behind this effort. There are, of course, other investments, such as in building up a digital trust center.

It’s our designated AI safety institute that has been participating in important conversations on this topic, as well as setting up a center for advanced technologies in online safety. So those are just some of the efforts that we can dedicate resources to as a small state. The second point I want to make is that there is almost always going to be a tension between moving quickly, given the pace of AI development, and moving carefully, given the latest evidence that presents itself on what we should be paying attention to. Both impulses are necessary, and we believe it is not impossible to try to balance them through the integration of science and policy. It is not easy, but it is not an effort that we must give up on.

I should just add that on this score, it will be much better if we can cooperate internationally to develop sound approaches that can also be interoperable across different jurisdictions. And this is one effort that we believe underpins the work that is being carried out by the UN. And this brings me to my third point. I want to highlight the important role that an organisation like the United Nations plays in facilitating global discourse to bridge science and policy. I cannot overemphasise the importance of this effort. We must recognise that the global AI governance landscape is becoming increasingly fragmented. There are multiple initiatives, frameworks and institutions. The UN’s unique value lies in its legitimacy and inclusiveness to encourage interoperability across efforts.

The Secretary-General talked about this too. We therefore welcome the establishment of the Independent International Scientific Panel on AI, building on the work of the UN High-Level Advisory Body on AI, which published its report on governing AI for humanity at the end of 2024. We note that the panel’s multidisciplinary approach, covering machine learning, applied AI, social science, ethics, all of these are necessary to address the complexity of AI governance challenges. Finally, I would just like to acknowledge that we now have substantial convergence on the high-level AI principles. Yoshua talked about this. Transparency, accountability, fairness, safety. But the challenge is in operationalizing them. We need to find standardized evaluation methodologies that work across different regulatory contexts.

We need capacity building so that all countries can meaningfully engage with the technical challenges and work with the technical evidence, not just those with large AI research ecosystems. I would encourage all stakeholders to view scientific input not as a constraint on policy flexibility, but as a foundation for more durable, effective governance that can maintain public trust. We need to keep the conversation going, one where science informs governance, and governance sharpens science. I would just perhaps end by highlighting Singapore’s continued commitment to contribute to advancing these discussions. We were very fortunate to host the International Scientific Exchange on AI Safety and to bring about the Singapore Consensus on Global AI Safety Research Priorities.

Yoshua was in Singapore for this very momentous event. We will continue to participate in joint testing efforts of the International Network for Advanced AI Measurement, Evaluation and Science. We have organized two editions of the Singapore AI Safety Red Teaming Challenge, the first multicultural and multilingual AI safety red teaming exercise focused on the Asia-Pacific region. And as chair of the ASEAN Working Group on AI Governance, we have actively spearheaded efforts to foster a trusted environment in ASEAN by adapting global norms and best practices for ASEAN, and in bringing about regional harmonization through the ASEAN Guide on AI Governance and Ethics, as well as expanding it to address the risks in generative AI. We are now working within ASEAN to explore practical tools for AI safety testing, and we aim to collectively develop a set of AI safety benchmarks that reflect our region’s concerns.

And finally, I’d like to welcome all colleagues to join us in Singapore for the second edition of the International Scientific Exchange, which we expect to take place on the 17th and 18th of May, and we look forward to furthering

Anil Ananthaswamy

Thank you very much once again. Thank you, Minister Teo, for your closing remarks. This session is now concluded. Thank you very much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“António Guterres’ leadership places science and multilateral cooperation at the heart of AI governance.”

The knowledge base records Guterres emphasizing the importance of science in global AI governance and calling for evidence-based, multilateral approaches [S20].

Confirmed (high)

“AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it.”

Guterres is noted as saying technological developments are unfolding at an unprecedented speed and that AI advancement is outpacing regulation and understanding [S89] and [S94].

Confirmed (high)

“Policy must be built on trusted facts rather than hype or disinformation.”

Guterres called for replacing hype and fear with shared, evidence-based approaches to AI policy [S5].

Confirmed (high)

“The creation of an Independent International Scientific Panel on Artificial Intelligence, described as fully independent, globally diverse and multidisciplinary, to give every country a clear analytical baseline.”

The panel is identified in the knowledge base as the first global scientific body on AI, independent and multidisciplinary, intended to provide expert evidence for all nations [S92] and [S93].

Additional Context (medium)

“The UN has been “indispensable to not just the protection of people, but the preservation of our species” by helping humanity live with nuclear weapons without using them.”

The knowledge base highlights the UN’s broader indispensable role in preventing regional crises and preserving humanity, though it does not specifically mention nuclear-weapon deterrence [S90] and [S91].

Additional Context (medium)

“Rapid, uneven growth of AI capabilities creates a lag between scientific publications and policy action because studies involving people can take months while AI systems evolve week by week.”

The pacing problem between fast-moving technology and slower governance is documented in the knowledge base, underscoring the same lag described by Bengio [S47].

Confirmed (high)

“The United Nations, created just over 80 years ago, remains humanity’s greatest accomplishment and an indispensable platform for coordinating AI governance.”

The UN’s indispensable nature and its 80-year history are affirmed in the knowledge base, which describes the organization as essential for global cooperation and crisis prevention [S90] and [S30].

External Sources (99)
S1
AI Meets Agriculture Building Food Security and Climate Resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S2
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S3
(Day 1) General Debate – General Assembly, 79th session: morning session — – António Guterres, Secretary-General of the United Nations César Bernardo Arévalo de León – Guatemala : Your Excellenc…
S4
Keynote-HE Emmanuel Macron — -Antonio Guterres: Title – His Excellency (likely UN Secretary-General based on context); Role – Delivered opening addre…
S5
Keynote-António Guterres — -Moderator: Role/Title: Discussion moderator; Areas of expertise: Not mentioned -Mr. Sundar Pichai: Role/Title: Not spe…
S6
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Josephine Teo- Role/title not specified (represents Singapore)
S8
S9
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S10
How to make AI governance fit for purpose? — – Anne Bouverot- Chuen Hong Lew – Jennifer Bachus- Anne Bouverot
S11
Amandeep Singh Gill — Mr Gill holds a PhD in Nuclear Learning in Multilateral Forums from King’s College, London, a Bachelor of Technology in …
S12
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — – Amandeep Singh Gill: UN Secretary General’s envoy on technology Amandeep Singh Gill broadened the scope of potential …
S13
A Digital Future for All (morning sessions) — – Amandeep Singh Gill – UN Secretary General’s Envoy in Technology Amandeep Singh Gill: Good morning. How are we toda…
S14
Keynote-Brad Smith — -Brad Smith: Role/Title: Vice Chair and President of Microsoft; Areas of expertise: Technology policy, privacy, cybersec…
S15
Brad Smith — As Microsoft’s vice chair and president, Brad Smith leads a team of more than 1,900 business, legal and corporate affair…
S16
Microsoft Vice Chair and President Brad Smith testimony before Senate on AI — Microsoft Vice Chair and President Brad Smith testafied before a Senate Judiciary subcommittee in a hearing titled ‘Over…
S17
Transcript from the hearing — Let me introduce the witnesses and seize this moment to let you have the floor. We’re honored to be joined by Dario Amad…
S18
UN Secretary-General unveils Science and Technology Advisory Board — The United Nations Secretary-General, António Guterres, announced the creation of aScientific Advisory Boardto provide i…
S19
Driving U.S. Innovation in Artificial Intelligence — 17. Yoshua Bengio – Professor, University of Montreal
S20
Why science metters in global AI governance — -Balaraman Ravindran- Professor at IIT Madras, member of International Independent Scientific Panel
S21
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Balaraman Ravindran- Dr. Urvashi Aneja
S22
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — – Balaraman Ravindran- Abdurrahman Habib – Balaraman Ravindran- S. Krishnan
S23
Why science metters in global AI governance — -Ajay Sood- Principal Scientific Advisor to the Government of India
S24
WS #202 The UN Cybercrime Treaty and Transnational Repression — Joey Shea: with the headphones on. We’re going to begin the session. My name is Joey Shea. I cover Saudi Arabia for …
S25
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S26
Why science metters in global AI governance — -Anil Ananthaswamy- Moderator/Host, Author of “The Elegant Math Behind Machine Learning”
S27
Artificial intelligence (AI) – UN Security Council — António Guterres, the Secretary-General, emphasized that”humanity must always retain control over decision-making functi…
S28
IGF 2024 Opening Ceremony — – António Guterres: UN Secretary General António Guterres: Excellencies, I am pleased to greet the Internet Governance …
S29
Software.gov — The interoperability of systems is maintained by establishing common standards and rules.
S30
https://dig.watch/event/india-ai-impact-summit-2026/why-science-metters-in-global-ai-governance — Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. So the situation…
S31
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Nicholas Thompson: Yoshua? Yoshua Bengio: All right, there are several things that Andrew said that I think are wrong…
S32
Science under siege from AI, integrity of research at risk — AI is rapidlytransformingthe landscape of scientific research, but not always for the better. A growing concern is the p…
S33
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overl…
S34
Building inclusive global digital governance (CIGI) — The impact of digital technologies, AI, data management, and governance is a subject of ongoing debate, with both opport…
S35
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — Diversity in evidence production and sharing is crucial
S36
Session — – The need for inclusion of diverse views, not just representation
S37
Open Forum #30 High Level Review of AI Governance Including the Discussion — These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a r…
S38
Data first in the AI era — – **Equity and Access as Core Challenges**: A central theme was ensuring equitable access to both data and the benefits …
S39
World Economic Forum Panel on Quantum Information Science and Technology — Equity and governance frameworks are crucial to ensure quantum technologies benefit all populations globally rather than…
S40
Global AI Policy Framework: International Cooperation and Historical Perspectives — The scientific panel will provide evidence-based policy assessments, whilst the global dialogue will enable multilateral…
S41
AI Safety at the Global Level Insights from Digital Ministers Of — There’s a gap between scientific reports and actionable policy guidance that could be filled with evidence-based policy …
S42
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S43
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S44
Measuring Gender Digital Inequality in the Global South — One of the speakers shared the opinion that although progress is being made in terms of digital skills and education, th…
S45
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Fireside Chat Moderator- Mariano-Florentino Cuellar — Minister Teo offered insights from Singapore’s experience navigating AI development amid great power competition. Singap…
S46
Lightning Talk #209 Safeguarding Diverse Independent NeWS Media in Policy — Background and Research Context: none identified beyond those in the speaker names list.
S47
Laying the foundations for AI governance — Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way…
S48
morning session — In addition to the discussions surrounding confidence-building measures and the BWC, this expanded summary also emphasiz…
S49
Table of contents — + Even though Estonia is esteemed as a digital country in the world, our attention and resources are largely directed to…
S50
Software.gov — The interoperability of systems is maintained by establishing common standards and rules.
S51
Law, Tech, Humanity, and Trust — Technical Standards and Interoperability Technical standardization is crucial for global interoperability
S52
Why science matters in global AI governance — And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countr…
S53
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Alexandra Kozik: Thank you very much and thank you so much for inviting us to this debate. Good morning from Brussels, of…
S54
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S55
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort: thank you Isadora yeah and thanks for giving me the opportunity to say a few things I there’s a little bit …
S56
AI Governance Dialogue: Steering the future of AI — Martin argues that high-level policy commitments must be accompanied by detailed technical standards to be effective. Wi…
S57
Foreword — – i. To achieve digital transformation, policy and regulation should be more holistic. Cross-sectoral collaboration alon…
S58
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Harmonization of policies across the region was identified as a critical goal to enable seamless transactions and integr…
S59
Artificial Intelligence & Emerging Tech — Jörn Erbguth: Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S60
What is it about AI that we need to regulate? — Regional coordination emerged as a key middle layer between global and local approaches.Folake Olagunju articulated this…
S61
IGF 2024 Opening Ceremony — This comment provided a structure for subsequent speakers to address specific aspects of AI governance and inequality. I…
S62
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Inclusivity is another key aspect of AI governance. It is crucial to have more inclusive conversations and ensure the pa…
S63
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspec…
S64
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S65
Setting the Rules_ Global AI Standards for Growth and Governance — Key areas of convergence included the importance of process-oriented standards that can adapt to evolving capabilities, …
S66
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Throughout the discussion, speakers emphasised that effective AI assurance cannot be achieved by individual organisation…
S67
The role of standards in shaping a safe and sustainable AI-driven future — He further expounded on the collaborative essence of standardisation work, which relies on mutual trust, understanding, …
S68
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S69
Open Forum #30 High Level Review of AI Governance Including the Discussion — International Cooperation and Framework Coordination The UN’s role should focus on providing independent scientific res…
S70
Why science matters in global AI governance — Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from develop…
S71
UNSC meeting: Scientific developments, peace and security — China:President, China, thanks. Foreign Minister Cassius for presiding over the meeting. I listened carefully to the pre…
S72
AI Safety at the Global Level Insights from Digital Ministers Of — There’s a gap between scientific reports and actionable policy guidance that could be filled with evidence-based policy …
S73
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S74
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S75
https://dig.watch/event/india-ai-impact-summit-2026/why-science-metters-in-global-ai-governance — But as I mentioned yesterday, compared to the people who lived in the Bronze Age, we’re all geniuses. We’re all geniuses…
S76
In brief — Humanitarian actors need to be aware of the different nuances of the term ‘evidence-based’, particularly w…
S77
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The panel discussion addressed needs of the Global South, with particular focus on capacity building for women and youth
S78
AI for Good Impact Initiative — Ebtesam Almazrouei: Thank you, Fred. Your Royal Highness, Your Excellencies, esteemed guests, allow me first to extend my…
S79
Inclusive AI governance: Perspectives from the Global South — At the 2024 Internet Governance Forum (IGF) in Riyadh, the Data and AI Governance coalition convened apanelto explore th…
S80
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou: So, thank you and welcome everybody to this very important session at the WSIS in this rainy weather. Today, I…
S81
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — While the panel focused heavily on Global South inclusion, an audience member challenged this narrow focus by highlighti…
S82
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Fireside Chat Moderator- Mariano-Florentino Cuellar — Minister Teo offered insights from Singapore’s experience navigating AI development amid great power competition. Singap…
S83
Policymaker’s Guide to International AI Safety Coordination — This observation set the analytical framework for much of the subsequent discussion. It influenced Minister Teo’s detail…
S84
Opening & Plenary segment: Summit of the Future – General Assembly, 3rd plenary meeting, 79th session — Multiple speakers including António Guterres, UN Secretary-General
S85
(Day 2) General Debate – General Assembly, 79th session: afternoon session — – Antonio Guterres: Secretary-General of the United Nations Allah Maye Halina – Chad: Madame President, Heads of State…
S86
UN: Summit of the Future Global Call — Melissa Fleming, the UN Under-Secretary-General for Global Communications, is moderating a global call ahead of the summ…
S87
AI Meets Cybersecurity Trust Governance & Global Security — “Move fast, break things.” [113] “And the motto there is move deliberately and maintain things.” [114] “How to be able to ge…
S88
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S89
IGF 2019 – Opening ceremony — United Nations Secretary-General Antonio Guterresopened his speech by drawing parallels with German Chancellor Angela Me…
S90
The 80th session of the UN General Assembly (UNGA 80) – Day 2 — Indispensable nature of the UN:Argued that in a time of extreme complexity and uncertainty, the UN is not only useful bu…
S91
vi CONTENTS — As Dag Hammarskjöld, the UN’s great second Secretary-General, put it, the United Nations was not created to take human…
S92
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The independent International Scientific Panel on artificial intelligence is the first global scientific body on…
S93
9821st meeting — 2. Creation of an International Scientific Panel on Artificial Intelligence Mr. President, allow me to make some recomm…
S94
UNGA/DAY 1/PART 2 — The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secr…
S95
AI and Digital Predictions for 2024 report — Discussions far from consensus.
S96
Is Geopolitical ‘Coopetition’ Possible? — Maros Sefcovic, a prominent advocate for global cooperation, emphasises the critical need to foster collaboration amidst…
S97
Open Forum: Liberating Science — In conclusion, climate change misinformation and disinformation hinder efforts to tackle the climate crisis by promoting…
S98
WS #270 Understanding digital exclusion in AI era — This highlights the tension between policy development and technological progress, particularly in countries where gover…
S99
AI for Humanity: AI based on Human Rights (WorldBank) — Stating that technology developments occur at a rapid pace implies a need for due diligence and risk assessment to keep …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
António Guterres
3 arguments · 110 words per minute · 653 words · 353 seconds
Argument 1
Science‑centered architecture for AI governance
EXPLANATION
Guterres argues that AI governance must be built around scientific knowledge, placing science at the core of international cooperation to ensure policies are evidence‑based rather than speculative. He stresses that a science‑led architecture will provide risk‑based guardrails that protect rights while accelerating progress.
EVIDENCE
He states that the United Nations is building a practical architecture that puts science at the centre of international cooperation on AI and that the Independent International Scientific Panel is designed to close the AI knowledge gap and provide a shared baseline for all countries, moving from blunt measures to smarter, risk-based guardrails that protect people and uphold human rights [17-20][24-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guterres stresses that AI governance must be built around scientific knowledge, replacing hype with shared evidence and calling for common safety measures and interoperability standards in his keynote and the “Why science matters” discussion [S5][S20][S29][S27].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
AGREED WITH
Yoshua Bengio, Anne Bouverot, Amandeep Singh Gill, Soumya Swaminathan, Josephine Teo
Argument 2
Panel provides a shared baseline of analysis for all nations
EXPLANATION
Guterres explains that the new Independent International Scientific Panel will deliver a common analytical foundation, enabling every country—regardless of AI capacity—to understand impacts and act with clarity. This shared baseline is meant to shift discussions from philosophical debates to technical coordination.
EVIDENCE
He describes the panel as designed to help close the AI knowledge gap, assess real impacts across economies and societies, and give countries at every level of AI capacity the same clarity, providing a shared baseline of analysis [19-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The independent International Scientific Panel is described as delivering a common analytical foundation and likened to an IPCC-style mechanism for AI in the Guterres keynote and the “Why science matters” report, reinforcing its role as a shared baseline [S5][S20][S34].
MAJOR DISCUSSION POINT
Creation and purpose of the Independent International Scientific Panel on AI
Argument 3
Common technical baselines and shared testing standards enable interoperability
EXPLANATION
Guterres contends that agreeing on common technical benchmarks for testing AI systems creates interoperability, allowing technologies to scale globally with confidence. Without such baselines, fragmented rules would raise costs and safety risks.
EVIDENCE
He notes that when we agree on how to test systems and measure risk we create interoperability, giving the example that a startup in New Delhi can scale globally because the benchmarks are shared and safety travels with the technology [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guterres calls for common safety measures, testing standards and interoperability across borders, echoing the emphasis on shared technical standards in the keynote and the software.gov description of interoperability through common rules [S5][S29].
MAJOR DISCUSSION POINT
Global cooperation, trust and avoiding fragmented regulations
AGREED WITH
Josephine Teo, Soumya Swaminathan, Amandeep Singh Gill
DISAGREED WITH
Yoshua Bengio
Yoshua Bengio
3 arguments · 141 words per minute · 828 words · 351 seconds
Argument 1
Neutral, fact‑based synthesis to inform policy
EXPLANATION
Bengio stresses that the scientific panel should produce a neutral, fact‑based synthesis that offers a shared understanding for policymakers, insulated from societal tensions. This synthesis helps identify where scientists agree, where evidence is strong, and where uncertainties remain.
EVIDENCE
He says the role of the synthesis is to provide a shared understanding as a basis for political discussions and to be as uninfluenced by tensions as possible, highlighting the need to recognize uncertainties, points of agreement, and strong evidence [68-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bengio argues for a neutral, fact-based synthesis to guide policymakers; this position is echoed in the Guterres keynote and the “Why science matters” briefing, and is reflected in his Davos remarks on providing shared understanding [S5][S20][S31].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
AGREED WITH
António Guterres, Anne Bouverot, Amandeep Singh Gill, Soumya Swaminathan, Josephine Teo
Argument 2
AI capabilities are growing unevenly and faster than scientific publishing, creating a lag
EXPLANATION
Bengio points out that AI capabilities are advancing rapidly and unevenly, outpacing the slower cycle of scientific studies and policy responses, which creates a lag between emerging risks and regulatory action. This lag hampers timely mitigation of potential harms.
EVIDENCE
He describes rapid growth of AI capabilities across labs and companies, noting that scientific papers and studies take months, so clues of potential problems appear only after a delay, exemplified by unexpected psychological effects of chatbots that were first observed anecdotally before scientific studies began [81-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bengio highlights the rapid, uneven advance of AI outpacing scientific studies, a concern also noted in Davos discussions and the AI Policy Research Roadmap which stresses the policy-research lag [S31][S33].
MAJOR DISCUSSION POINT
Rapid AI advancement and the policy lag
DISAGREED WITH
Josephine Teo
Argument 3
Emphasise high‑level, principle‑based guardrails that survive technical change
EXPLANATION
Bengio argues that policy should focus on high‑level principles that remain applicable despite fast‑changing technical details, rather than trying to codify every specific technology. Such principles can guide the development of guardrails that are robust over time.
EVIDENCE
He suggests thinking about high-level principles that can be applied without delving into details because the details will change, and stresses the need for technology that implements those guardrails in the field [85-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bengio recommends high-level principle guardrails that remain relevant despite technical shifts; this aligns with his Davos statements and the AI policy roadmap that calls for principle-level guidance balanced with practical guardrails [S31][S33].
MAJOR DISCUSSION POINT
Rapid AI advancement and the policy lag
DISAGREED WITH
António Guterres
Soumya Swaminathan
4 arguments · 180 words per minute · 451 words · 149 seconds
Argument 1
Policy must adapt to the best available evidence, not wait for certainty
EXPLANATION
Swaminathan asserts that policy cannot wait for absolute certainty; it must be based on the best current evidence and remain flexible to adapt as new data emerges. This mirrors the rapid evidence turnover experienced during the COVID‑19 response.
EVIDENCE
She explains that during COVID-19 they reviewed hundreds of publications daily to make recommendations, and that policy must change, ask for relevant evidence, and adapt when that evidence becomes clear [217-220].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Swaminathan draws on the COVID-19 experience of daily evidence review to argue for evidence-based, adaptable policy, a point reiterated in the “Why science matters” discussion [S20][S33].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
AGREED WITH
António Guterres, Yoshua Bengio, Anne Bouverot, Amandeep Singh Gill, Josephine Teo
Argument 2
Panel functions like an IPCC for AI, linking science to policy worldwide
EXPLANATION
Swaminathan likens the new UN scientific panel to the IPCC, suggesting it should serve as a global mechanism that aggregates scientific findings and connects them to policy decisions across nations, facilitating coordinated responses to AI challenges.
EVIDENCE
She states that the UN body is similar to the IPCC and should establish systems that link to national bodies, ensuring all voices are heard and that policy is informed by science [209-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She likens the new UN scientific panel to the IPCC, a comparison supported by the Building Inclusive Global Digital Governance report and the “Why science matters” brief that stress an IPCC-style global mechanism for AI [S34][S20].
MAJOR DISCUSSION POINT
Creation and purpose of the Independent International Scientific Panel on AI
AGREED WITH
António Guterres, Josephine Teo, Amandeep Singh Gill
DISAGREED WITH
Balaraman Ravindran
Argument 3
Inclusion of diverse, especially low‑income, voices is critical for trustworthy evidence
EXPLANATION
Swaminathan emphasizes that for scientific evidence to be trusted and actionable, it must incorporate perspectives from low‑income populations and diverse stakeholders, ensuring recommendations are relevant globally. She cites past criticism of WHO recommendations that were not suitable for low‑income contexts.
EVIDENCE
She notes that during COVID-19 some recommendations were relevant only to high-income countries, highlighting the need to include voices of women, low-income women, and remote farmers to make AI work for everyone [213-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for diversity in evidence production and inclusion of low-income perspectives are documented in inclusive governance literature and equity-focused studies, highlighting the need for broad stakeholder input [S35][S36][S37][S38].
MAJOR DISCUSSION POINT
Global cooperation, trust and avoiding fragmented regulations
AGREED WITH
Balaraman Ravindran, Yoshua Bengio, Josephine Teo
Argument 4
Equity must be central to AI governance, ensuring benefits for all populations
EXPLANATION
Swaminathan stresses that equity should be at the heart of AI governance, guaranteeing that AI benefits are distributed fairly and that marginalized groups are not left behind. She calls for the panel to network scientists across sectors to address safety, risks, and equity.
EVIDENCE
She mentions that the panel could help network scientists sectorally, look at emerging evidence, set priorities, and ensure equity is central to AI as a public good [315-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Equity and access as core challenges are emphasized in data-first equity analyses and the World Economic Forum panel on equitable technology governance, reinforcing her argument for equity-centered AI policy [S38][S39].
MAJOR DISCUSSION POINT
Capacity building and equitable AI deployment
AGREED WITH
Balaraman Ravindran, Yoshua Bengio, Josephine Teo
Anne Bouverot
2 arguments · 142 words per minute · 501 words · 211 seconds
Argument 1
Understanding reduces fear and enables informed decisions
EXPLANATION
Bouverot argues that fear stems from lack of understanding, and that increasing scientific comprehension of AI reduces anxiety and supports rational policy choices. She cites Marie Curie’s famous quote to illustrate this point.
EVIDENCE
She quotes Marie Curie saying “nothing in life is to be feared, everything is to be understood” and stresses that understanding is the first step before moving to policymakers and citizens [250-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to replace hype and fear with shared scientific understanding is highlighted in the Guterres keynote and the “Why science matters” briefing, supporting Bouverot’s claim that understanding mitigates fear [S5][S20].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
AGREED WITH
António Guterres, Brad Smith, Yoshua Bengio
Argument 2
Multidisciplinary, UN‑backed panel essential for credible advice
EXPLANATION
Bouverot highlights that a multidisciplinary panel anchored in the UN provides credible, globally accepted scientific advice for AI governance. She points to France’s support and the nomination of a scientist to the panel as evidence of its importance.
EVIDENCE
She notes that France is fully supportive of the scientific panel, proud of Joëlle Barral as a nominee, and stresses that multidisciplinary, UN-backed panels are essential for credible advice [260-264].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of a multidisciplinary, UN-anchored scientific panel for credible global advice is underscored in the Guterres keynote and the Building Inclusive Global Digital Governance report [S5][S34].
MAJOR DISCUSSION POINT
Creation and purpose of the Independent International Scientific Panel on AI
Amandeep Singh Gill
1 argument · 136 words per minute · 644 words · 283 seconds
Argument 1
The “science‑evidence‑policy” loop drives effective governance
EXPLANATION
Gill describes a feedback loop where scientific evidence informs policy, and policy questions shape further scientific inquiry, creating a dynamic cycle that enhances AI governance. He frames this loop as central to the discussion of the new panel.
EVIDENCE
He states there is a loop between science and evidence, and evidence and policy, and that they want to explore that loop today in the context of the International Independent Scientific Panel [201-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The “science-evidence-policy” feedback loop is described in the “Why science matters” discussion as central to the new panel’s work, illustrating how scientific evidence informs policy and vice-versa [S20].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
Brad Smith
3 arguments · 132 words per minute · 1339 words · 606 seconds
Argument 1
Hype and grandiose predictions hinder realistic governance; focus on current facts
EXPLANATION
Smith criticizes the tendency to make bold, unverified predictions about AI, arguing that such hype distracts from grounded, fact‑based governance. He advocates focusing on present evidence rather than speculative crystal‑ball forecasts.
EVIDENCE
He recounts listening to predictions that scored an average of 25% accuracy, stating there is no crystal ball, and emphasizes using current understanding each year rather than grandiose forecasts [152-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Smith’s criticism of hype aligns with Guterres’ call to replace hype with evidence and with observations that past AI predictions lack empirical grounding, as noted in the keynote and a source on the lack of historical evidence for AI tipping points [S5][S30].
MAJOR DISCUSSION POINT
Rapid AI advancement and the policy lag
AGREED WITH
António Guterres, Yoshua Bengio, Anne Bouverot
Argument 2
The United Nations remains the indispensable platform for coordinated global action
EXPLANATION
Smith asserts that the UN is essential for global cooperation, preventing fragmentation and providing a framework where nations can collectively address AI challenges. He cites the UN’s historical role in averting nuclear catastrophe and its ongoing relevance.
EVIDENCE
He mentions that the UN has been indispensable for protecting people and preserving our species, referencing its role in managing nuclear weapons and being part of solutions worldwide [112-119][126-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Smith’s assertion is supported by the Guterres keynote emphasizing the UN’s historic role in global coordination and the UN Security Council AI meeting that highlights the UN’s centrality in AI governance [S5][S27].
MAJOR DISCUSSION POINT
Global cooperation, trust and avoiding fragmented regulations
Argument 3
AI should augment human capability rather than replace it; focus on practical benefits
EXPLANATION
Smith argues that AI’s value lies in enhancing human abilities and solving problems, not merely in creating smarter machines. He stresses using AI to make people smarter and to address societal needs.
EVIDENCE
He says the important question is not whether machines will be smarter than humans, but how we will use them to make people smarter and help us do what we need to do [174-177].
MAJOR DISCUSSION POINT
Operationalising AI principles and building standards
Balaraman Ravindran
2 arguments · 169 words per minute · 483 words · 171 seconds
Argument 1
Lack of local benchmarks hampers policy decisions in education and agriculture
EXPLANATION
Ravindran points out that India lacks domestic evidence and benchmarks to assess AI’s impact on education and agriculture, making it difficult to craft effective policies. He calls for data on AI’s effects on youth, children, and rural versus urban contexts.
EVIDENCE
He describes uncertainty about AI’s impact on the social fabric, children, youth, and agriculture, noting the absence of Indian benchmarks and examples of AI bots for farmers, and mentions preliminary studies on AI in education with unclear causal relationships [225-240].
MAJOR DISCUSSION POINT
Rapid AI advancement and the policy lag
DISAGREED WITH
Soumya Swaminathan
Argument 2
Research on AI’s impact on youth, education, and agriculture is needed to guide equitable policies
EXPLANATION
Building on the previous point, Ravindran stresses the need for systematic research to generate evidence on AI’s effects on youth, learning behavior, and agricultural efficiency, which would inform equitable policy decisions. He highlights the importance of understanding habit‑driven AI adoption in schools.
EVIDENCE
He mentions preliminary studies showing AI adoption in education is linked to habit, but the causal direction is unclear, and calls for more evidence to evaluate AI’s effectiveness in agriculture and education [229-240].
MAJOR DISCUSSION POINT
Capacity building and equitable AI deployment
AGREED WITH
Soumya Swaminathan, Yoshua Bengio, Josephine Teo
Ajay Sood
1 argument · 138 words per minute · 304 words · 131 seconds
Argument 1
Embedding governance through “techno‑legal” design integrates risk management into systems
EXPLANATION
Sood proposes a “techno‑legal” approach where governance mechanisms are built directly into technical designs, allowing AI systems to incorporate risk mitigation at the architectural level. This method mirrors India’s experience with digital public infrastructure.
EVIDENCE
He explains that governance was embedded through technical design in digital public infrastructure, calling it “techno-legal” and suggesting it as a way to handle AI risks [296-298].
MAJOR DISCUSSION POINT
Operationalising AI principles and building standards
Josephine Teo
5 arguments · 140 words per minute · 901 words · 385 seconds
Argument 1
Singapore’s commitment to the panel and its role in global AI safety
EXPLANATION
Teo affirms Singapore’s support for the Independent International Scientific Panel and its active participation in global AI safety initiatives, positioning Singapore as a proactive small‑state contributor. She highlights hosting events and collaborating on safety testing.
EVIDENCE
She welcomes the establishment of the panel, notes Singapore’s role in the International Scientific Exchange on AI Safety, the Singapore Consensus, and participation in joint testing efforts and red-team challenges, and mentions the ASEAN work on AI safety benchmarks [335-352].
MAJOR DISCUSSION POINT
Creation and purpose of the Independent International Scientific Panel on AI
AGREED WITH
António Guterres, Yoshua Bengio, Anne Bouverot, Amandeep Singh Gill, Soumya Swaminathan
Argument 2
Need for standardized evaluation methodologies that work across regulatory contexts
EXPLANATION
Teo calls for common evaluation methods that can be applied internationally, enabling consistent assessment of AI systems despite differing regulatory environments. Standardization is essential for interoperability and trust.
EVIDENCE
She states the need for standardized evaluation methodologies that work across different regulatory contexts and mentions capacity building for all countries to engage with technical challenges [340-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for common evaluation methods and interoperable standards mirrors Guterres’ emphasis on shared safety measures and the software.gov description of interoperability through common standards [S5][S29].
MAJOR DISCUSSION POINT
Operationalising AI principles and building standards
AGREED WITH
António Guterres, Soumya Swaminathan, Amandeep Singh Gill
Argument 3
Investment in AI R&D and safety institutes creates the scientific base for standards
EXPLANATION
Teo emphasizes that sustained investment in AI research and dedicated safety institutes provides the scientific foundation needed to develop robust standards and guidelines. Singapore has allocated a billion‑dollar AI R&D plan and established a digital trust centre.
EVIDENCE
She notes Singapore set aside a billion dollars in a national AI R&D plan for foundational and applied research into responsible AI, and mentions a designated AI safety institute and a centre for advanced technologies in online safety [320-322].
MAJOR DISCUSSION POINT
Operationalising AI principles and building standards
AGREED WITH
António Guterres
Argument 4
Capacity‑building programmes ensure all countries can engage with technical challenges
EXPLANATION
Teo argues that integrating science and policy and fostering international cooperation help build capacity in all nations to understand and regulate AI, preventing fragmentation. She stresses the importance of collaborative approaches for global interoperability.
EVIDENCE
She says both impulses of speed and caution are necessary, and that integration of science and policy, plus international cooperation, can develop sound interoperable approaches, highlighting capacity building as essential [323-326].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building as a means to enable worldwide participation in AI governance is highlighted in the “Why science matters” briefing and the inclusive governance report that stresses building scientific capacity across nations [S20][S34].
MAJOR DISCUSSION POINT
Capacity building and equitable AI deployment
Argument 5
ASEAN and Singapore initiatives illustrate regional harmonisation efforts
EXPLANATION
Teo highlights ASEAN’s work on AI governance, including guides and red‑team challenges, as examples of regional harmonisation that complement global UN efforts. These initiatives aim to align standards across the region.
EVIDENCE
She describes ASEAN’s AI Governance Guide, efforts to adapt global norms, the Singapore AI Safety Red Teaming Challenge, and work to develop regional AI safety benchmarks, as well as Singapore’s ongoing participation in joint testing and capacity-building activities [327-334][347-353].
MAJOR DISCUSSION POINT
Global cooperation, trust and avoiding fragmented regulations
Agreements
Agreement Points
Science should be central to AI governance, providing a shared evidence‑based foundation for policy.
Speakers: António Guterres, Yoshua Bengio, Anne Bouverot, Amandeep Singh Gill, Soumya Swaminathan, Josephine Teo
Science‑centered architecture for AI governance Neutral, fact‑based synthesis to inform policy Understanding reduces fear and enables informed decisions The ‘science‑evidence‑policy’ loop drives effective governance Policy must adapt to the best available evidence, not wait for certainty Singapore’s commitment to the panel and its role in global AI safety
All speakers stress that AI governance must be built on scientific knowledge and evidence, with panels and loops that translate neutral, fact-based synthesis into policy, and that understanding reduces fear and builds trust [17-20][24-30][68-74][250-259][201-202][217-220][335-340].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on science aligns with calls for evidence-based AI governance highlighted in recent policy briefs, which argue that scientific assessment enables early risk anticipation and informs international cooperation [S52] and is reflected in the AI Policy Research Roadmap advocating systematic evidence gathering [S55].
Establishing common technical baselines and shared testing standards is essential for interoperability and coordinated global action.
Speakers: António Guterres, Josephine Teo, Soumya Swaminathan, Amandeep Singh Gill
Common technical baselines and shared testing standards enable interoperability Need for standardized evaluation methodologies that work across regulatory contexts Panel functions like an IPCC for AI, linking science to policy worldwide The ‘science‑evidence‑policy’ loop drives effective governance
Speakers agree that shared benchmarks, standardized evaluation methods and an IPCC-style panel create interoperable frameworks that allow AI systems to scale safely across borders [41-44][340-342][209-212][201-202].
POLICY CONTEXT (KNOWLEDGE BASE)
Technical baselines and testing standards are core to interoperability frameworks such as the Software.gov guidance on common standards [S50] and are echoed in IGF discussions on global AI standards that stress shared testing protocols for coordinated action [S51][S65].
Inclusion of diverse, especially low‑income and regional, voices is critical to produce trustworthy evidence and ensure equitable AI outcomes.
Speakers: Soumya Swaminathan, Balaraman Ravindran, Yoshua Bengio, Josephine Teo
Inclusion of diverse, especially low‑income, voices is critical for trustworthy evidence Research on AI’s impact on youth, education, and agriculture is needed to guide equitable policies AI will affect developing countries … need multidisciplinary array so everyone at the table Equity must be central to AI governance, ensuring benefits for all populations
All four speakers highlight the need to incorporate perspectives from low-income groups, regional contexts and developing countries to build credible evidence and equitable policies [213-218][225-240][92-94][337-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive governance is a pillar of UNCTAD’s AI equity agenda and of multistakeholder standard-setting processes that stress participation from developing countries and under-represented groups [S62][S63][S64].
Reduce hype and focus on concrete, evidence‑based facts to guide realistic AI governance.
Speakers: António Guterres, Brad Smith, Yoshua Bengio, Anne Bouverot
Less noise, more knowledge Hype and grandiose predictions hinder realistic governance; focus on current facts policy cannot be built on guesswork Understanding reduces fear and enables informed decisions
The speakers concur that AI policy should be grounded in solid evidence rather than speculative predictions, emphasizing factual knowledge and public understanding [15-17][52-53][152-166][68-74][250-259].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls to curb hype and prioritize factual evidence appear in analyses of AI’s rapid development versus policy lag, urging evidence-based framing to avoid speculative regulation [S52][S54].
Sustained investment in AI research, safety institutes and capacity‑building programmes underpins robust standards and global cooperation.
Speakers: Josephine Teo, António Guterres
Investment in AI R&D and safety institutes creates the scientific base for standards Science‑led governance … accelerator for solutions
Both speakers underline that dedicated funding for AI research and safety, together with capacity-building, provides the scientific foundation needed for effective standards and international collaboration [320-322][24-30].
POLICY CONTEXT (KNOWLEDGE BASE)
Investment in research and safety institutes is highlighted in the AI Policy Research Roadmap as essential for building capacity and trustworthy standards, and collaborative safety monitoring initiatives stress the need for sustained funding [S55][S66].
Similar Viewpoints
Both emphasize the United Nations as the essential, irreplaceable platform for coordinating global AI governance and fostering scientific cooperation [17-20][112-119][126-129].
Speakers: António Guterres, Brad Smith
Science‑centered architecture for AI governance The United Nations remains the indispensable platform for coordinated global action
Both note that rapid AI advances outpace research and prediction, leading to a lag that makes hype‑driven forecasts unreliable and underscores the need for evidence‑based approaches [81-84][152-166].
Speakers: Yoshua Bengio, Brad Smith
AI capabilities are growing unevenly and faster than scientific publishing, creating a lag Hype and grandiose predictions hinder realistic governance; focus on current facts
Unexpected Consensus
Regional harmonisation and benchmark development for AI applications
Speakers: Balaraman Ravindran, Josephine Teo
Research on AI’s impact on youth, education, and agriculture is needed to guide equitable policies ASEAN and Singapore initiatives illustrate regional harmonisation efforts
Despite speaking from distinct national contexts (India and Singapore), both speakers stress the need for locally generated evidence and regional coordination (e.g., benchmarks in education and agriculture, and ASEAN harmonisation) to inform policy, an alignment not obvious given their different vantage points [225-240][327-334].
POLICY CONTEXT (KNOWLEDGE BASE)
Regional harmonisation is identified as a key step between global and national policies, with IGF and digital cooperation forums recommending benchmark development at the regional level to enable seamless integration [S58][S60][S65].
Overall Assessment

The discussion shows strong consensus that science must be at the heart of AI governance, that common technical standards and shared baselines are vital for interoperability, that inclusive and equitable evidence‑generation is essential, and that hype should be replaced by factual, evidence‑based policy. There is also agreement on the need for sustained investment and capacity building to support these goals.

High – The convergence across UN leadership, academia, industry and regional representatives indicates a solid foundation for coordinated, science‑driven AI governance, increasing the likelihood of effective global policy frameworks.

Differences
Different Viewpoints
Approach to policy design – high‑level principle‑based guardrails versus detailed technical baselines and shared testing standards
Speakers: Yoshua Bengio, António Guterres
Emphasise high‑level, principle‑based guardrails that survive technical change Common technical baselines and shared testing standards enable interoperability
Bengio argues that policy should focus on high-level principles that remain applicable despite rapid technical change, rather than trying to codify every detail [85-86]. Guterres, by contrast, stresses the need for common technical benchmarks for testing AI systems to create interoperability and allow technologies to scale globally [41-44]. Both seek effective AI governance but propose different routes.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between principle-based guardrails and detailed technical standards is discussed in policy briefs that argue high-level commitments must be paired with concrete standards to be operationally effective [S56][S65].
Handling the speed of AI advancement – accepting a lag between research and policy versus trying to balance speed with caution through integrated science‑policy processes
Speakers: Yoshua Bengio, Josephine Teo
AI capabilities are growing unevenly and faster than scientific publishing, creating a lag Need to balance rapid AI development with careful, evidence‑based policy through integration of science and policy
Bengio points out that AI advances faster than scientific studies can keep up, creating a lag that hampers timely regulation [81-84]. Teo acknowledges a similar tension, noting that moving quickly and moving carefully are both necessary and must be balanced via science-policy integration [323-326]. While they share the concern, Bengio emphasizes the inevitability of lag, whereas Teo stresses a proactive balancing act.
POLICY CONTEXT (KNOWLEDGE BASE)
The pacing problem between fast-moving AI research and slower policy processes has been highlighted in recent sessions, noting the need for integrated science-policy mechanisms to reduce the lag [S47][S54].
Source of evidence for policy – reliance on a global IPCC‑style scientific panel versus the need for locally generated benchmarks and data
Speakers: Soumya Swaminathan, Balaraman Ravindran
Panel functions like an IPCC for AI, linking science to policy worldwide Lack of local benchmarks hampers policy decisions in education and agriculture
Swaminathan likens the new UN scientific panel to the IPCC, arguing it should aggregate global scientific findings to inform policy [209-212]. Ravindran stresses that India lacks domestic evidence and benchmarks to assess AI’s impact on education and agriculture, making policy formulation difficult [225-240]. The disagreement lies in the emphasis on global versus local evidence generation.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on global versus local evidence sources reference proposals for an IPCC-style AI panel alongside calls for region-specific benchmarks to capture contextual nuances, as seen in discussions on inclusive evidence generation [S52][S58].
Unexpected Differences
None identified
The discussion was largely collaborative, with speakers building on each other’s points rather than presenting starkly opposing views. No surprise conflicts emerged beyond the nuanced differences noted above.
Overall Assessment

The speakers largely converged on the need for science‑based, evidence‑driven AI governance, the importance of international cooperation, and the role of the UN‑backed scientific panel. The main points of contention concerned the preferred mechanism for translating science into policy – high‑level principles versus detailed technical standards – and the balance between global versus local evidence generation.

Low to moderate. While there are nuanced differences in implementation strategies, there is broad consensus on overarching goals. This suggests that future work on the Independent International Scientific Panel can progress with relatively smooth coordination, though careful attention will be needed to reconcile principle‑based approaches with concrete technical standards and to integrate both global and local evidence streams.

Partial Agreements
Both agree that AI governance must be grounded in scientific evidence. Guterres calls for a science‑led architecture that provides a shared baseline for all nations [17-20][24-30], while Swaminathan stresses that policy should be based on the best current evidence and remain adaptable [217-220]. They differ in emphasis: Guterres focuses on building a global scientific infrastructure, whereas Swaminathan highlights the need for rapid evidence turnover and flexibility.
Speakers: António Guterres, Soumya Swaminathan
Science‑centered architecture for AI governance Policy must adapt to the best available evidence, not wait for certainty
Both advocate for standardized, interoperable technical frameworks. Guterres argues that agreeing on how to test systems creates interoperability [41-44], while Teo calls for standardized evaluation methods that function across different regulatory regimes [340-342]. Their goals align, but Guterres emphasizes global benchmarks, whereas Teo stresses methodological standardisation coupled with capacity‑building.
Speakers: António Guterres, Josephine Teo
Common technical baselines and shared testing standards enable interoperability Need for standardized evaluation methodologies that work across regulatory contexts
Takeaways
Key takeaways
Science must be the foundation of AI governance; neutral, fact‑based synthesis is needed to inform policy. The United Nations is establishing an Independent International Scientific Panel on AI to provide a shared, multidisciplinary baseline for all nations, especially giving a voice to the Global South. AI capabilities are advancing faster than scientific publishing and policy processes, creating a lag that must be addressed with high‑level, principle‑based guardrails. Fragmented national regulations risk higher costs and reduced safety; common technical standards and interoperable testing frameworks are essential. Equitable and inclusive evidence—incorporating voices from low‑income countries, women, youth, and diverse sectors—is critical for trustworthy governance. Operationalising AI principles requires standardized evaluation methods, capacity‑building, and embedding governance into technology (“techno‑legal” design). Industry (e.g., Microsoft) and small states (e.g., Singapore) are committing resources to AI R&D, safety institutes, and regional harmonisation efforts.
Resolutions and action items
The UN panel will fast‑track a first report ahead of the Global AI Governance Summit in July. Member states are invited to adopt the panel’s shared baseline of analysis for technical coordination and risk‑based guardrails. Singapore will host the second edition of the International Scientific Exchange on AI Safety (May 17‑18) and continue its AI safety red‑team challenges and ASEAN harmonisation work. Microsoft pledged to devote energy and resources to support the UN‑led scientific panel and related governance initiatives. India’s National AI Governance Framework will pursue public‑private partnerships to build compute capacity and embed techno‑legal safeguards. Panel members, including Yoshua Bengio, Balaraman Ravindran, and Anne Bouverot, will work to develop multidisciplinary evidence streams for health, education, agriculture, and youth impacts.
Unresolved issues
How to create rapid, reliable scientific benchmarks that keep pace with fast‑moving AI capabilities. Specific methodologies for measuring AI impact on education, agriculture, and youth in diverse contexts, especially in the Global South. Concrete mechanisms for translating high‑level AI principles into standardized, cross‑jurisdictional evaluation protocols. Ways to ensure continuous inclusion of under‑represented voices (e.g., low‑income women, remote farmers) in the evidence‑generation process. The extent and timing of policy interventions when scientific certainty is low but potential risks are high (e.g., tipping‑point analogies). Funding models and resource allocation for sustained global scientific collaboration beyond initial UN panel activities.
Suggested compromises
Adopt high‑level, principle‑based guardrails that remain applicable despite technical change, rather than detailed prescriptive rules. Balance rapid AI development with careful, evidence‑driven policy by integrating science continuously into the policy cycle. Combine “techno‑legal” design (embedding governance into system architecture) with flexible regulatory frameworks to allow adaptation. Use the UN’s legitimacy to create interoperable standards while allowing national contexts to tailor implementation. Treat scientific input as a foundation for durable governance rather than a constraint on policy flexibility, enabling iterative refinement.
Thought Provoking Comments
Science is a universal language. When we agree on how to test systems and measure risk, we create interoperability, allowing a startup in New Delhi to scale globally with confidence because the benchmarks are shared.
Highlights the foundational role of shared scientific standards in overcoming fragmentation and building trust across borders, framing science as the bridge between diverse policy regimes.
Set the agenda for the whole session, prompting subsequent speakers to discuss how to create common baselines, and influencing the panelists to stress the need for neutral, globally accepted metrics.
Speaker: António Guterres
The situation is similar to climate tipping points: we lack past evidence to be sure a particular tipping point will happen, yet the potential severity is catastrophic. We must recognize uncertainty, identify where evidence is strong, and act on high‑severity risks even without proof.
Provides a powerful analogy that clarifies why precautionary governance is needed despite scientific uncertainty, linking AI risk assessment to well‑understood climate policy frameworks.
Shifted the conversation from abstract optimism to concrete risk‑management, leading the panel to explore how to identify and prioritize uncertain but high‑impact AI risks.
Speaker: Yoshua Bengio
There is a well‑known economic theory that humanity repeats its great economic mistakes every 80 years because each generation forgets the previous crises. The United Nations, created just over 80 years ago, is one of humanity’s greatest successes and must be reinvested in.
Frames the UN’s relevance historically, using a cyclical view of economic memory to argue for institutional continuity in the face of rapid technological change.
Re‑centered the dialogue on the strategic importance of multilateral institutions, prompting other speakers (e.g., Josephine Teo) to emphasize UN legitimacy and inclusiveness.
Speaker: Brad Smith
I used Microsoft Copilot to grade AI predictions from industry leaders; the average accuracy was 25%. There is no crystal ball. We have the ability to understand where we are today, not where we will be a decade from now.
Critiques the culture of hype and over‑promising in AI, grounding the discussion in empirical performance and urging humility.
Triggered a tone shift toward skepticism of grandiose forecasts, encouraging panelists like Bengio and Swaminathan to stress evidence‑based policy rather than speculative visions.
Speaker: Brad Smith
People disagree because they don’t have a common understanding of the problem. We rush to debate solutions without first agreeing on the problem’s context.
Identifies a fundamental communication breakdown that hampers effective governance, suggesting a procedural remedy—shared problem definition.
Guided the moderator to frame the rapid‑fire round around “loops” between science and policy, and inspired panelists to discuss how to build shared contextual understanding.
Speaker: Brad Smith
During COVID we reviewed hundreds of papers daily to make rapid recommendations. AI is similar; we need a global body like the IPCC to provide fast, trustworthy evidence that can be adapted to different country contexts.
Draws a direct parallel between pandemic response and AI governance, illustrating how rapid evidence synthesis can inform timely policy while acknowledging contextual differences.
Prompted the panel to consider mechanisms for fast evidence aggregation and highlighted the need for inclusivity of low‑income perspectives, influencing later remarks on equity.
Speaker: Soumya Swaminathan
If economists predict 80% of jobs will be transformed, policy should focus on training and reskilling; if they predict half the jobs will disappear, policy should consider universal basic income. The underlying scientific forecast determines the policy response.
Shows how divergent scientific predictions lead to vastly different policy pathways, underscoring the importance of accurate forecasting for social policy design.
Added nuance to the discussion on AI’s labor impact, prompting participants to think about scenario‑based policy planning rather than one‑size‑fits‑all solutions.
Speaker: Anne Bouverot
We embed governance through technical design – a ‘techno‑legal’ approach – as we did with India’s digital public infrastructure for identity and finance. This can be a model for AI safety.
Introduces a concrete, implementation‑focused strategy that blends law and technology, moving the conversation from abstract principles to actionable design patterns.
Shifted the dialogue toward practical engineering solutions, influencing later remarks about standardised evaluation methodologies and benchmarking.
Speaker: Ajay Sood
Balancing the impulse to move quickly with the need to move carefully is not impossible; it requires integration of science and policy, and international cooperation to develop interoperable approaches.
Synthesises the central tension of the whole session—speed versus safety—and positions the UN as the facilitator of interoperable, science‑driven governance.
Served as a concluding synthesis that reinforced earlier points about shared baselines, inclusivity, and the UN’s unique legitimacy, tying together the diverse strands of the discussion.
Speaker: Josephine Teo
AI has strong potential for helping science, as seen with recent Nobel‑winning work in physics and chemistry, but this requires open, globally funded databases of scientific data.
Extends the conversation beyond governance to the positive feedback loop where AI accelerates scientific discovery, emphasizing infrastructure needs for that synergy.
Opened a brief but significant side‑track on the benefits of AI for scientific research, reinforcing the panel’s earlier call for multidisciplinary collaboration.
Speaker: Anne Bouverot
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that repeatedly returned to the need for shared scientific baselines, humility in the face of uncertainty, and concrete mechanisms for translating evidence into policy. Guterres’ framing of science as a universal language set the stage, while Bengio’s climate‑tipping‑point analogy and Brad Smith’s critique of hype sharpened the focus on precautionary, evidence‑based governance. Contributions from Swaminathan and Bouverot linked these ideas to real‑world crises and labor policy, respectively, and Sood’s ‘techno‑legal’ proposal offered a tangible design pathway. Josephine Teo’s closing synthesis tied the threads together, reaffirming the UN’s role as the integrator of speed, safety, and inclusivity. Collectively, these comments redirected the conversation from lofty aspirations to actionable, interdisciplinary strategies, shaping a nuanced, forward‑looking consensus on how science can effectively inform global AI governance.

Follow-up Questions
What is the evidence on how AI affects children, youth, and social fabric in India, including issues like isolation and mental health?
Ravindran highlighted a lack of data on AI’s societal impacts in the Global South, especially on vulnerable groups, indicating a need for targeted research.
Speaker: Balaraman Ravindran
What benchmarks and evaluation methods can assess the efficiency and effectiveness of AI applications in Indian agriculture, such as AI co-pilots for farmers?
He asked for concrete evidence and metrics to gauge AI’s contribution to agricultural productivity, revealing a gap in measurable standards.
Speaker: Balaraman Ravindran
What is the causal relationship between AI usage and student learning outcomes in education—does AI use improve learning, or do better learners use AI more?
Ravindran noted uncertainty about directionality, calling for rigorous studies to untangle cause and effect.
Speaker: Balaraman Ravindran
How can globally accessible, publicly funded scientific data repositories be created to enable AI‑driven scientific discovery?
She emphasized the need for worldwide databases built by scientists and supported by governments to unlock AI’s potential in research.
Speaker: Anne Bouverot
What standardized, interoperable AI safety evaluation methodologies can be developed to work across different regulatory contexts?
She identified the lack of common evaluation tools as a barrier to operationalizing high‑level AI principles globally.
Speaker: Josephine Teo
What capacity‑building programs are needed so all countries, especially low‑resource ones, can meaningfully engage with technical AI challenges?
She pointed out disparities in technical expertise and the necessity of support mechanisms for inclusive participation.
Speaker: Josephine Teo
How can the voices of women, low‑income populations, and remote farmers be systematically incorporated into AI policy evidence and recommendations?
Drawing on WHO experience, she stressed the importance of inclusive evidence that reflects diverse contexts.
Speaker: Soumya Swaminathan
What processes and feedback loops are required to translate complex scientific AI findings into language and formats usable by policymakers?
He highlighted the communication gap between scientists and decision‑makers and the need for iterative, interdisciplinary interfaces.
Speaker: Yoshua Bengio
What high‑level, technology‑agnostic principles can guide AI governance to remain effective despite rapid technical change?
He suggested focusing on broad principles rather than detailed rules to keep pace with AI’s fast evolution.
Speaker: Yoshua Bengio
How can forecasting of AI developments be improved and made accountable, given the poor track record of past predictions?
He criticized inaccurate future forecasts and implied the need for better predictive methodologies and accountability mechanisms.
Speaker: Brad Smith
What concrete evidence links AI interventions to progress on Sustainable Development Goals, and what examples can illustrate this link?
When asked for SDG impact examples, he lacked ready data, indicating a research gap in measuring AI’s contribution to SDGs.
Speaker: Balaraman Ravindran
How can a shared global baseline for AI testing and risk measurement be established to ensure interoperability and avoid fragmented regulations?
He advocated for common technical standards to enable consistent safety and trust across jurisdictions.
Speaker: António Guterres
What multidisciplinary research is needed to anticipate AI’s impacts on developing countries and ensure equitable outcomes?
He expressed concern about the Global South and called for cross‑disciplinary studies to forecast and mitigate risks.
Speaker: Yoshua Bengio

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.