Why science matters in global AI governance
20 Feb 2026 18:00h - 19:00h
Summary
The session opened with Anil Ananthaswamy stating that effective governance requires understanding, and the UN Secretary-General highlighted the urgency of grounding AI policy in science [1-2][5-15]. Guterres announced the creation of an Independent International Scientific Panel on AI, describing it as independent, globally diverse and multidisciplinary, intended to provide a shared baseline of analysis for all countries [17-22][24-26]. He argued that science-led guardrails can protect human rights while accelerating innovation, and that a universal scientific language can align technical standards and reduce fragmented rule-making [27-33][38-43]. Emphasising human oversight, he said policy must be evidence-based, with clear accountability so decisions are not outsourced to algorithms [45-49].
In the fireside chat, Yoshua Bengio noted that AI scientists often disagree on future risks, making it essential to identify where evidence is strong and where uncertainty remains, similar to climate tipping-point debates [70-78]. He stressed that rapid AI advances create a lag between scientific findings and policy action, requiring neutral, accessible evaluations for policymakers [81-84]. Soumya Swaminathan compared the AI challenge to the COVID-19 response, urging rapid, globally coordinated evidence mechanisms and inclusive systems that reflect diverse contexts, especially in low-income settings [206-214][217-220]. Balaraman Ravindran highlighted the lack of data on AI’s social impacts in the Global South, citing education and agriculture as areas where evidence on effectiveness and equity is still missing [225-236][229-240].
Anne Bouverot argued that misunderstanding fuels fear, and that accurate scientific panels are needed to inform both citizens and policymakers, using past job-loss predictions as an example of how evidence shapes policy choices [250-275]. Ajay Sood described India’s National AI Governance Framework, which combines public-private partnerships, techno-legal design, and capacity-building to manage risks while scaling AI services [283-300]. Singapore’s Minister Josephine Teo reinforced the need for sustained research investment, a balance between speed and caution, and international cooperation to create interoperable standards, positioning the UN as the legitimate hub for such coordination [320-340][345-352].
Across speakers, there was consensus that scientific assessment, shared benchmarks, and inclusive dialogue are critical to prevent fragmented regulations and to operationalise high-level AI principles such as transparency and safety [33-34][331-340]. The discussion concluded that a UN-anchored, multidisciplinary scientific panel can bridge evidence and policy, making AI governance both effective and trustworthy for global development goals [45-49][55][345-346][354-357].
Key points
Major discussion points
– Science as the foundation of global AI governance – The UN is building a practical architecture that puts science at the centre, creating an Independent International Scientific Panel to provide a shared baseline of analysis and interoperable technical standards so that “countries at every level of AI capacity can act with the same clarity” and “guardrails… can travel with the technology” [17-24][36-43][45-48].
– Bridging the science-policy gap amid uncertainty and rapid change – Yoshua Bengio stresses that AI research shows “very rapid growth… uneven… surpassing most people on some measurements and being kind of stupid… on others,” creating a lag between scientific evidence and policy decisions; he argues for neutral, fact-based evaluations that recognise uncertainty, highlight severe-risk clues, and help policymakers act despite limited proof [67-78][81-86].
– Industry’s role in fostering a common, evidence-based understanding – Brad Smith warns that debates often stall because “people don’t have a common understanding of the problem” and are “too quick to want to blame someone,” urging a shift from hype to facts and emphasizing that the UN is the best platform to build that shared scientific basis [103-149][143-148].
– Ensuring inclusivity and equity, especially for the Global South – Both Bengio and panelists highlight the need for a globally diverse, multidisciplinary panel that “makes sure that everyone is at the table and no one is on the menu,” and stress that evidence must be actionable for low-income contexts (e.g., COVID-19 experience, AI impacts on youth in India) and that equity should be at the heart of AI for the public good [90-94][213-218][225-236][320-334].
– Concrete steps: benchmarks, capacity-building, and operationalising principles – Singapore’s Minister Josephine Teo outlines concrete investments (a $1 billion AI R&D plan, a digital trust centre, AI safety institute) and calls for “standardized evaluation methodologies,” international cooperation on interoperable tools, and capacity-building so that high-level AI principles become actionable across jurisdictions [320-345].
Overall purpose / goal
The session was convened to launch and explain the United Nations’ new science-driven framework for AI governance, particularly the Independent International Scientific Panel, and to explore how robust, globally shared scientific evidence can bridge the gap between rapid AI innovation and responsible policy, while ensuring inclusive participation from all regions and sectors.
Overall tone
The discussion began with a formal, urgent tone emphasizing the need for scientific grounding ([1-4], [17-24]). It shifted to a reflective, technical tone during the fireside chat, acknowledging uncertainty and the difficulty of translating science into policy ([67-86]). The industry contribution added a pragmatic, cautionary tone, warning against hype and urging common understanding ([103-149]). Throughout, the tone remained constructive and collaborative, moving toward optimism as panelists highlighted concrete initiatives and global cooperation ([320-345]). No major negative or confrontational shifts were observed; the conversation consistently aimed at building consensus and actionable pathways.
Speakers
– António Guterres
– Role / Title: Secretary-General of the United Nations
– Areas of Expertise: International diplomacy, multilateral cooperation, AI governance leadership
– Sources: [S3]
– Anil Ananthaswamy
– Role / Title: Moderator / Host, Author of The Elegant Math Behind Machine Learning
– Areas of Expertise: Science communication, machine learning, public engagement
– Sources: [S26]
– Brad Smith
– Role / Title: Vice Chair and President, Microsoft Corporation
– Areas of Expertise: Technology policy, AI regulation, corporate leadership, privacy & cybersecurity
– Yoshua Bengio
– Role / Title: Professor, Université de Montréal; leading AI researcher
– Areas of Expertise: Deep learning, AI safety, machine learning research
– Sources: [S19]
– Balaraman Ravindran
– Role / Title: Professor, Indian Institute of Technology Madras; Member, Independent International Scientific Panel on AI
– Areas of Expertise: AI, machine learning, applications in agriculture and education, AI policy implications for the Global South
– Sources: [S20]
– Ajay Sood
– Role / Title: Principal Scientific Advisor to the Government of India
– Areas of Expertise: National AI governance, digital public infrastructure, techno-legal frameworks, AI risk assessment
– Sources: [S23]
– Amandeep Singh Gill
– Role / Title: Under-Secretary-General and Special Envoy for Digital and Emerging Technologies, United Nations
– Areas of Expertise: Digital policy, emerging technologies, multilateral coordination, science-policy interface
– Sources: [S11], [S12], [S13]
– Anne Bouverot
– Role / Title: France’s Special Envoy for Artificial Intelligence; former Director General, GSMA
– Areas of Expertise: AI policy, digital trust, telecommunications, AI ethics and governance
– Soumya Swaminathan
– Role / Title: Former Chief Scientist, World Health Organization
– Areas of Expertise: Global health, evidence-based policy, pandemic response, scientific advisory leadership
– Josephine Teo
– Role / Title: Minister for Digital Development and Information, Singapore
– Areas of Expertise: Digital policy, AI R&D investment, AI safety and trust infrastructure, international AI governance
Additional speakers:
– None. All speakers appearing in the transcript are covered by the provided speakers-names list.
The session opened with Anil Ananthaswamy reminding the audience that “we cannot govern what we do not understand” and introducing United Nations Secretary-General António Guterres, whose leadership places science and multilateral cooperation at the heart of AI governance [1-4][5-15]. Guterres framed the challenge as a race against “AI innovation moving at the speed of light, outpacing our collective ability to fully understand it” and argued that policy must be built on trusted facts rather than hype or disinformation [10-16]. He announced the creation of an Independent International Scientific Panel on Artificial Intelligence, describing it as “fully independent, globally diverse and multidisciplinary” and intended to give every country, regardless of AI capacity, a clear analytical baseline [17-24][25-30]. The Secretary-General stressed that science-led guardrails protect human rights, preserve agency and accelerate innovation, positioning science as a “universal language” that can create interoperable technical standards so that a startup in New Delhi can scale globally with confidence [31-34][38-43][45-49].
In the subsequent fireside chat, Professor Yoshua Bengio highlighted the difficulty of reaching consensus among AI researchers, noting that “Scientists themselves don’t always agree on what to expect for the future” [70-71]. He argued that a neutral, fact-based synthesis is required to provide a shared understanding for policymakers and used a climate-tipping-point analogy to illustrate why precaution is needed even when evidence is incomplete: “if the risk has huge severity… then policymakers need to pay attention” despite a lack of proof [72-78]. Bengio also pointed out that the rapid, uneven growth of AI capabilities creates an inevitable lag between scientific publications and policy action, because studies involving people can take months while AI systems evolve week by week [81-84]. He suggested that governance should focus on “high-level principles that can be applied without having to go into the details because the details are going to change” [85-88]. This stance contrasts with Guterres’ emphasis on “aligning technical baselines, shared testing and risk measurement” to ensure interoperability and safety across borders [44-48].
Brad Smith, Vice-Chair and President of Microsoft, reinforced the need for a common, evidence-based understanding, warning that “people don’t have a common understanding of the problem” and that debates often devolve into blame without first agreeing on the problem’s context [143-148]. He invoked an 80-year economic-cycle theory to argue that the United Nations, created just over 80 years ago, remains humanity’s “greatest accomplishment” and an indispensable platform for coordinating AI governance [103-112][126-129]. Smith also noted that the UN has helped humanity “live with the ever-constant presence of nuclear weapons without using them,” calling it indispensable to the preservation of our species [115-118]. He criticised the culture of grandiose predictions, noting that his own grading of industry forecasts yielded an average accuracy of only 25% and that “there is no such thing as a crystal ball” [152-166][167-168].
Dr Soumya Swaminathan underscored the importance of inclusive, rapid evidence generation by comparing the AI challenge to the COVID-19 response. She described how, during the pandemic, her team reviewed “a couple of hundred publications every day” to issue timely recommendations, and she called for a global scientific body, “something like the IPCC” for AI, to provide fast, trustworthy evidence that can be adapted to diverse national contexts [206-214][217-220]. She warned that without such mechanisms, policy may be made in advance of evidence, risking irrelevance or harm, and emphasized that “policy must change when evidence becomes clear” [218-220].
Representing the Global South, Professor Balaraman Ravindran highlighted the paucity of data on AI’s social impacts in India, questioning how AI affects youth, children’s mental health, and agricultural productivity and noting that most stories come from the West and that “we don’t have evidence of AI interventions” in education or farming [229-236]. His remarks illustrate the need for locally-generated benchmarks to evaluate AI’s effectiveness and equity, especially in low-resource settings [229-236].
Anne Bouverot, France’s Special Envoy for AI, echoed the theme that misunderstanding fuels fear. She quoted Marie Curie (“nothing in life is to be feared, everything is to be understood”) to argue that scientific panels are essential for both citizens and policymakers [250-259]. Bouverot also cited past job-loss predictions, referencing both the Oxford and Elon Musk forecasts, to show how divergent scientific forecasts lead to vastly different policy responses, from universal basic income to reskilling programmes, underscoring the necessity of accurate, evidence-based forecasts [268-275].
Ajay Sood described India’s National AI Governance Framework, which combines public-private partnerships, “techno-legal” design, and capacity-building to embed governance directly into AI systems, mirroring the country’s earlier digital public-infrastructure experience [283-300]. He acknowledged the current uncertainty about AI risks but argued that embedding safeguards at the technical level offers a pragmatic path forward [291-300].
Singapore’s Minister Josephine Teo presented a concrete investment agenda, noting a US$1 billion national AI R&D plan that funds foundational and applied research on responsible AI [320-322]. She described the Digital Trust Centre and the AI Safety Institute as national assets that operationalise safety standards. Teo stressed the need to balance rapid AI development with careful, evidence-based policy, arguing that “both impulses are necessary” and that international cooperation is essential for interoperable standards [323-326][327-334]. She reiterated the UN’s unique legitimacy for global AI discourse, citing the UN High-Level Advisory Body on AI report (published end-2024) as the basis for the new Independent International Scientific Panel, and warned that operationalising high-level AI principles through standardized evaluation methodologies and capacity-building for all countries is the current challenge [335-345]. Regional actions announced included Singapore’s hosting of the International Scientific Exchange on AI Safety (first edition) and its second edition on 17-18 May [350-353], the Singapore AI Safety Red-Team Challenge (the first multicultural, multilingual exercise for the Asia-Pacific), Singapore’s chairmanship of the ASEAN Work Group on AI Governance and the development of the ASEAN Guide on AI Governance and Ethics (extending to generative AI) [360-363], and an India-wide collaboration on the International Network for Advanced AI Measurement, Evaluation and Science for joint testing efforts [364-367].
Moderator Amandeep Singh Gill framed the discussion as a “science-evidence-policy loop,” opening with the technical observation that “≈90% of AI is matrix multiplication; a 0.01% improvement in its efficiency has huge energy implications” [190-192]. He linked the Independent International Scientific Panel’s work to turning “facts and evidence” into a reliable engine for the Sustainable Development Goals [201-206][309-312]. Gill’s rapid-fire round reinforced the consensus that science must be central, that common technical baselines are vital for interoperability, and that inclusive evidence-generation is essential for equitable outcomes [241-246][309-312].
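Gill’s matrix-multiplication remark is essentially a back-of-envelope energy calculation. The sketch below illustrates the arithmetic behind it; the input figures (100 TWh per year of AI compute) are purely illustrative assumptions, not numbers given in the session.

```python
# Rough sketch of the "AI is mostly matrix multiplication" energy point.
# All input figures are illustrative assumptions, not from the session.

def matmul_energy_savings(total_twh: float,
                          matmul_share: float,
                          efficiency_gain: float) -> float:
    """Return annual energy saved (in MWh) if the matrix-multiplication
    portion of AI workloads becomes `efficiency_gain` more efficient."""
    matmul_twh = total_twh * matmul_share     # energy spent on matmul
    saved_twh = matmul_twh * efficiency_gain  # fraction shaved off
    return saved_twh * 1_000_000              # TWh -> MWh

# Assume 100 TWh/year of AI compute, 90% of it matrix multiplication,
# and a 0.01% (1e-4) efficiency improvement in the matmul kernels.
saved_mwh = matmul_energy_savings(100.0, 0.90, 0.0001)
print(f"{saved_mwh:,.0f} MWh saved per year")
```

Under these assumed figures the saving is on the order of 9,000 MWh per year, which is why even tiny kernel-level efficiency gains are worth pursuing at global scale.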
In conclusion, the participants reached broad agreement that science is the indispensable foundation for AI governance and that the UN-anchored Independent International Scientific Panel will provide the neutral, multidisciplinary evidence needed to bridge the gap between fast-moving technology and responsible policy. Action items include fast-tracking the panel’s first report ahead of the Global AI Governance Summit in July [22-24]; Singapore’s commitment to host the second International Scientific Exchange, develop regional safety benchmarks, and advance the ASEAN Guide and the International Network for Advanced AI Measurement [350-353][360-363][364-367]; Microsoft’s pledge to devote resources to UN-led scientific efforts [182-183]; and India’s rollout of its National AI Governance Framework with techno-legal safeguards [283-300]. The session closed with Minister Josephine Teo reaffirming the UN’s role as the legitimate hub for global AI discourse, urging continued collaboration to turn scientific insight into trustworthy, inclusive governance [335-345][350-353][354-357].
Today’s session begins from a simple but powerful premise. We cannot govern what we do not understand. It is my honor to open this session with a special address by the Secretary General of the United Nations, whose leadership has placed science and multilateral cooperation at the forefront of global AI governance. So please join me in welcoming His Excellency Antonio Guterres.
Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Thank you for joining this discussion on the role of science in international AI governance. We are barreling into the unknown. AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it, let alone govern it. AI does not stop at borders, and no nation can fully grasp its implications on its own. If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype or disinformation. We need facts we can trust and share across countries and across sectors. Less noise, more knowledge. That is why the United Nations is building a practical architecture that puts science at the center of international cooperation on AI.
And it starts with the Independent International Scientific Panel on Artificial Intelligence. This panel is designed to help close the AI knowledge gap and assess the real impacts of AI across economies and societies so countries at every level of AI capacity can act with the same clarity. It is fully independent, it is globally diverse, and it is multidisciplinary, because AI touches every area of every society. And I’m delighted that the General Assembly of the United Nations confirmed the 40 experts I proposed to member states. Now the real work begins, on a fast track to deliver a first report ahead of the Global Summit, the Global Dialogue on AI Governance, in July. The panel will provide a shared baseline of analysis,
helping member states move from philosophical debates to technical coordination, and anchor choices in evidence so policy is neither a blunt instrument that stifles progress nor a bystander to harm. That is how science transforms decision-making. When we understand what systems can do and what they cannot, we can move from rough measures to smarter, risk-based guardrails. Guardrails that protect people, uphold human rights, and preserve human agency. Guardrails that build confidence and give business clarity so innovation can move faster in the right direction. Science-led governance is not a brake on progress. It is an accelerator for solutions. A way to make progress safer, fairer, and more widely shared. It helps us identify where AI can do the most good the fastest.
And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countries can prepare, protect, and invest in people. Today, international cooperation is difficult. Trust is strained, and technological rivalry is growing. Without a common baseline, fragmentation wins, with different regions and different countries operating under incompatible policies and technical standards. A patchwork of rules will raise costs, weaken safety, and widen divides. Science is a universal language. Guided by the independent panel and the global dialogue on AI governance, we can align, with the world, our technical baselines. When we agree on how to test systems and measure risk, we create interoperability. So a start-up in New Delhi can scale globally with confidence, because the benchmarks are shared, and safety can travel with the technology.
Finally, let us be clear. Science informs, but humans decide. Our goal is to make human control a technical reality, not a slogan. And that requires meaningful human oversight in every high-stakes decision, in justice, health care, credit. And it requires clear accountability so responsibility is never outsourced to an algorithm. People must understand how decisions are made, challenge them, and get answers. Excellencies, ladies and gentlemen, the message is simple. Less hype, less fear. More facts and evidence. Guided by science, we can transform AI from a source of uncertainty into a reliable engine for the sustainable development goals. Let us build a future where policy is as smart as the technology it seeks to guide. Thank you.
Thank you, Secretary General, for those inspiring opening remarks. Ladies and gentlemen, we were going to have Mr. Brad Smith, Vice Chair and President of Microsoft Corporation, as our next speaker, but he’s running a bit late, so we will move to the next item in the agenda. I would like to welcome Professor Yoshua Bengio to the stage, Scientific Director of Mila and one of the world’s leading AI researchers. He and I will be in a fireside chat, and we’re hoping that Mr. Brad Smith will be able to join us very soon. Thank you. So, welcome, Professor Bengio.
Thank you for having me.
Our pleasure. So, you are the most cited computer scientist, and I looked it up: you’re actually the most cited living scientist today. You have played a unique role at the global science-policy interface, including through the UN Scientific Advisory Board and your leadership of the International AI Safety Report. So from your perspective, how do these science-policy interfaces actually work in practice, and where do they add the most value?
So it’s tricky, right, because there are many different views, and especially different interests, in business and in different governments. And the role of science, the role of the kind of synthesis of science that we want for the UN panel, that we sought for the AI Safety Report, is to try to provide a shared understanding as a basis for those political discussions, and not be influenced, as much as is humanly possible, by those tensions that exist in our societies. And I think it’s particularly important because, maybe unlike in the case of climate, the scientists themselves don’t always agree on what to expect for the future, or even how to interpret the science that exists.
I just want to add something. Something that’s a little bit subtle about this kind of exercise is being able to recognize the uncertainty and the divergences that exist: where is it that scientists agree, where is it that the evidence is strong, where is it that we have clues that matter. Even if we’re not certain about a particular risk, we might have clues about it. But if the risk has huge severity, in other words, if it does unfold it could be catastrophic, then policymakers need to pay attention. And it’s always difficult when we don’t have proof that something terrible is going to happen. Maybe a good analogy is tipping points in climate, right?
Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. The situation is similar in AI, in the sense that we don’t have the experience of, say, machines that are really smart, that can change society, and that could be even potentially smarter than us. So how can we make the right policy decisions? That’s why it is so important to have as neutral and as fact-based an evaluation as possible of what is going on, available to everyone, in a language that is accessible to everyone, and of course to policymakers. Which, by the way, is difficult for scientists to achieve: they need help, they need iterations, they need feedback from people who are used to the interface between science and policy.
Is there anything in particular about the highly technical nature of AI and also the pace of change that makes this interface particularly difficult?
Yes, yes. The scientific benchmarks across labs, companies and academia show very rapid growth of the capabilities of these systems, and that growth is uneven. So we see AIs even surpassing most people on some measurements of capability, and being kind of stupid, or like a six-year-old, on some other things. So it’s very difficult to grasp what that means. But because it’s moving so fast, there’s always going to be a lag: even the scientific papers take time to be written, and studies that involve people are going to take months. So by the time we start seeing clues that there’s a potential problem… You can think of something recent that was not expected, like the psychological effects on people of these chatbots: we now have lots of anecdotal evidence, and we’re only starting to see the scientific studies. And of course on the policy side it’s going to be even more difficult, even later, because those discussions are going to happen after we see scientific evidence. So there is going to be a lag, and that’s a real problem, because things could move fast.
So maybe that leads well into our next question. We often hear that AI governance is moving too slowly and from your experience, what kinds of scientific assessments or benchmarks could realistically keep pace with this rapid change?
Yeah, that’s a great question. My opinion on this is that we should be thinking about not just policy in the usual sense of coming up with principles; we should try to strive for high-level principles that can be applied without having to go into the details, because the details are going to change. And the second thing is, I think we should strive for technologies that are going to help implement those guardrails in the field, in the deployment of AI, because otherwise there’s not enough time to act.
Well, thank you for those insights. And also congratulations on your recent appointment to the Independent International Scientific Panel on AI. In a few words, how do you see this new panel helping to strengthen the link between science and global AI policymaking?
Well, I think there’s something really important about this panel: its global aspect and being rooted in the UN. And the reason I’m saying this is that AI is going to be transforming our world very clearly, and it’s going to have global effects, whether on the side of the benefits or on the risks, but also on the kind of power relationships that are going to be changing in the future. And I’m personally very concerned about how this will unfold for developing countries in the Global South. And we need to work in a multidisciplinary way so that we can foresee those effects and we can start discussions to make sure that everyone is at the table and no one is on the menu.
Well said, Professor Bengio. Well, thank you very much for kick-starting our discussion. We will now turn to our panel. So, ladies and gentlemen, it is essential that discussions about AI policy include the voices of key industry actors, and I am pleased to invite Mr. Brad Smith, Vice Chair and President, Microsoft Corporation, for his keynote address.
Well, good morning, everyone. It’s a pleasure to be here. My apologies for being a few minutes late. I want to offer a couple of thoughts this morning. The first thing I think we should come together to think about is that, in my opinion, this is a moment in time when we need to reflect on and reinvest in the importance of the United Nations. There is a well -known economic theory that says that humanity is, in many ways, almost destined to repeat its great economic mistakes every 80 years. The reason it’s 80 years is because that is basically the lifespan of human beings. And so every 80 years, almost everyone who had any living memory of a prior financial calamity has left the planet.
If you look at the Great Recession that started in 2008, what you realize is that it happened 79 years after the stock market crash that led to the Great Depression in 1929. And you can follow this series of financial mistakes all the way back to the bursting of the tulip bubble in the Netherlands hundreds of years ago. I think there is a corollary worth thinking about. Just as there is a risk that humanity forgets the mistakes it made 80 years ago, humanity runs the risk of forgetting the great successes. it created 80 years ago. It was just over 80 years ago that the world came together to create the United Nations. It was, in my opinion, one of humanity’s greatest accomplishments of the 20th century.
It is a unique organization in a very imperfect world. And so, of course, on any day and any year, it is possible for anyone to blame the United Nations for the imperfections that we see all around us. But the truth is this. Those imperfections are fewer, and their consequences are less disastrous, in my view, because of the United Nations. And one of the great things about working at Microsoft, in a job like mine, in my opinion, is that I get to work in a global organization. We have subsidiaries in 120 countries. We do work in 190 countries. We see the world. It turns out that everywhere we go, we see the United Nations. Sometimes it’s the United Nations Development Program, working to foster economic development.
Sometimes it is UNHCR, helping refugees. Sometimes it is the UN Office of Human Rights, seeking to protect human rights. But the truth is, if there’s a problem, the United Nations is almost always part of the solution. We need to remember this. And we need to remember that however challenging the last 80 years have been, we have managed, as humanity, as a species, to live with the ever-constant presence of nuclear weapons without using them or destroying ourselves. The United Nations has, in fact, in my view, been indispensable to not just the protection of people, but the preservation of our species. Why does that matter now? Why should we talk about it today and this week in Delhi?
Well, because here we are on the cusp of the future. A technology that we all know will likely change the future. Here we are in the second month of the second quarter of the 21st century, and we need to focus on how we bring the institutions on which we rely into that future. So then let me talk about a second aspect that I think is so important to think about this month. One of the things I’m constantly struck by… leading a global organization is how often everyone disagrees with each other about almost everything. But one of the things I’ve learned along the way is that I think one of the reasons people so quickly disagree is that we rush so quickly to debate competing solutions.
This happens in domestic politics. It happens in international diplomacy. It, frankly, happens in a global company. It actually happens everywhere, even in families. As soon as there’s a problem, people want to talk about the solution. And then people have different solutions, and then they debate, and they disagree, and they argue, and sometimes it’s even worse than that. One of the things I’ve learned is the reason people so often disagree about the solution is they don’t have a common understanding of the problem. They don’t spend enough time talking about the problem. They don’t have a shared contextual understanding. of the problem they’re trying to solve. They’re too quick to want to blame someone for the problem, and then that spirals into a discussion that becomes completely unconstructive.
Why does that matter today? Because what we’re here to talk about today is all about creating a more common understanding together, based on science, of where artificial intelligence is going. This is an indispensable tool. Indeed, it’s a critical service for humanity so we can all learn together, we can all think together, we can all understand together what is going on in the world. I think it’s especially critical, to be honest, when it comes to artificial intelligence, because if you consider most of the conversations you have about this technology, I would argue that they have two flaws. The first flaw is they usually involve people making very grandiose predictions about the future.
You know what? I’ve worked in the tech sector for 32 years. I have listened for more than three decades to my colleagues in my industry around the world make bold predictions about the future. No one ever holds them accountable a decade later for whether they were right or wrong. I used the researcher agent in Microsoft Copilot a couple weekends ago, and I loaded a lot of names. I won’t say whom, but you can guess. And I said, look at all the predictions they made about all the technologies, and look at the predictions they made about when these technologies would come to do something or another, and give them a grade. The average grade was 25%. You couldn’t even get close to the top.
You were at the bottom. So let’s just understand one thing together. There is no such thing as a crystal ball. No one has one. But what we do have is the ability to understand where we are today. And what we do have is a better understanding to just appreciate what is happening each and every year. There is a second flaw, in my view, in many of the conversations that take place, including at this AI summit. Everybody wants to talk about how they’re going to make machines smarter. That’s interesting. I think it’s interesting to imagine living in a world where a data center is like a country of geniuses. But as I mentioned yesterday, compared to the people who lived in the Bronze Age, we’re all geniuses.
We’re all geniuses already. What that should remind us is that human capability is neither fixed nor finite. And so what really matters, in my opinion, is not whether we are going to build machines that are smarter than humans. Yes, in some ways we will. But how will we use those machines to make people smarter, to help us do what we need to do? That is what this effort is all about. Now, let’s harness the power of science to build a common understanding of what is changing each year, and then let’s connect it with the global dialogue on governance so we can pursue policies that will ensure that this technology serves people. There’s no better place to get started than here.
There’s no better time than now. And let’s face it, there is no better institution on the planet that can do more to serve humanity and protect the world than the United Nations. And on behalf of Microsoft, I just want you to know we are putting our full energy and resources into doing everything that we can to help. Thank you very much.
Thank you. Thank you, Mr. Smith, for those insights on responsibility, accountability, and the role of industry. We now turn to our panel. Our panel brings together scientific leadership, public policy expertise, and international coordination. Please welcome to the stage our speakers: Professor Balaraman Ravindran, IIT Madras; Soumya Swaminathan, former Chief Scientist, WHO; Ajay Kumar Sood, Principal Scientific Advisor to the Government of India; and Anne Bouverot, France’s Special Envoy for AI. I am also pleased to introduce our moderator, Amandeep Singh Gill, Under-Secretary-General and Special Envoy for Digital and Emerging Technologies. I invite him to guide the discussion. Thank you very much.
Thank you very much. Thank you, Anil, for leading us, and for those who have not read his book, Why Machines Learn: The Elegant Math Behind Modern AI, please do have a go at it. We cannot govern something that we don’t understand. Something as simple as this: if 90% of AI is matrix multiplication, then, as he was explaining, a 0.01% improvement in the efficiency of matrix multiplication has huge energy implications. So I want to welcome our esteemed panelists. The stage has been set by very inspiring keynotes and a fireside chat. So we will dive straight in. And since we are running a little short of time, I’m going to compress the two rounds into one rapid-fire round.
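[Editor’s note: to make the moderator’s back-of-envelope arithmetic concrete, here is a minimal illustrative calculation. The fleet-wide energy figure is an assumption invented for this sketch, not a number cited in the session.]

```python
# Illustrative back-of-envelope estimate. The fleet energy figure below
# is a hypothetical assumption, not data from the session: the point is
# only that a tiny relative efficiency gain in matrix multiplication
# scales to a large absolute saving when matmul dominates AI compute.

ASSUMED_FLEET_TWH_PER_YEAR = 100.0  # hypothetical annual AI data-centre energy use
MATMUL_SHARE = 0.90                 # "90% of AI is matrix multiplication"
EFFICIENCY_GAIN = 0.0001            # a 0.01% improvement

saved_twh = ASSUMED_FLEET_TWH_PER_YEAR * MATMUL_SHARE * EFFICIENCY_GAIN
saved_mwh = saved_twh * 1_000_000   # 1 TWh = 1,000,000 MWh

print(f"Energy saved: {saved_twh:.4f} TWh/year ({saved_mwh:,.0f} MWh/year)")
```

Under these assumed numbers the saving works out to thousands of megawatt-hours per year, which is the scale of effect the moderator alluded to.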
So all of you have worked on or are working on the science-policy interface. And my sense is that there is a loop here, a loop between science and evidence, and between evidence and policy. And we want to explore that loop today in the context of the significant development of the setting up of the International Independent Scientific Panel at the United Nations. So I want to start with you, Soumya. You were the first chief scientist, the first woman chief scientist, at the WHO, and you worked at a very difficult time during COVID, when trusted evidence was so critical. So in your view, what makes this evidence that comes from science trusted and actionable for policymakers?
The field is moving very rapidly. In COVID, we had to review a couple of hundred publications every day to understand what was happening on different aspects: on the virus, on the immunology, on how vaccines and drugs were working, and we had to make recommendations based on the best available evidence that day. I think we may be in a similar situation with AI, and it’s wonderful that the UN has now set up this body, which I see as something like the IPCC. I think we do need global governance. We’re talking now about preventing future pandemics by sharing data on pathogens, making sure that we have protocols in place where countries are willing to share that data, and also, of course, to share the tools, the vaccines or drugs, when they become available, in case there is another pandemic.
Similarly, I hope that this scientific body that’s been set up by the UN would also establish systems that would link to national bodies and systems, and that would ensure the voices of all are heard. One of the problems during COVID was that some of our recommendations were relevant in high-income countries but not in low-income countries, because the context is very different. The WHO was criticized for this, I think rightfully so, and we need to learn from those mistakes. So it’s the voices, for example, of women: a low-income woman, a farmer in a remote place, is going to use technology very differently from a large farmer with access to lots of machines in Europe or North America.
So if AI has to work for everyone, then we need to make sure that those voices are heard. And ultimately, coming back to that loop you talked about: sometimes policy is made in advance of evidence. You have to; you can’t wait. But the policy must change. It must ask for the relevant evidence and be able to adapt when that becomes clear.
Thank you very much, Soumya. I’m going to come to you, Ravi, Professor Balaraman Ravindran. Now, as AI policies begin to take shape, and you’ve been involved in some policymaking yourself, what signals from regulators or public sector users should most urgently guide future AI research priorities? In a sense, the loop coming back into research.
Thank you for that question. Right now, especially in the Global South, we don’t completely understand the implications of adopting AI and how it is going to affect society, people’s livelihoods, and everything else. In fact, I also feel that we don’t have enough evidence about how AI is affecting the social fabric: how children are getting increasingly isolated with the adoption of AI, and whether the effect is uniform between cities and rural India, because the cultural setup is very different, and so on. If the government, as we heard our honourable Prime Minister say yesterday, should focus more on youth and the impact of AI on youth, what evidence do we have about what is happening in India? We hear stories about children’s dependence on AI models, and about people who are mentally challenged or under stress, but all of these stories are coming to us from the West. So what is happening in India? When these kinds of policy decisions have to be made, and the government says that AI should be pushing efficiency in agriculture, do we have a benchmark in India that can evaluate the effectiveness of these AI models in agriculture? What kinds of flaws occur when, for example, I build a bot that can act as a co-pilot for a farmer? These are bigger challenges, and we have a lot of questions.
If I can quickly follow up: where do you actually see evidence for impact in the sustainable development goals space? Just a quick example or two.
So, that was not in the notes he gave us earlier, so I have to think on my feet here. Let me take one thing that we are very familiar with and are working on right now, in the education space. For example, we don’t have evidence on AI interventions: how likely are they to change student learning behavior? We have done some preliminary studies. The author of the study is somewhere in the audience, because he has been sending me pictures of the stage. What we have found is that the effectiveness of AI adoption is a direct function of habit. So if the students are using AI more, then they tend to…
But I don’t know what the causal factor is there. I don’t know whether they are using AI more and therefore getting better results, or whether they use AI more because they are getting better results. These are questions that we have to ask, even in something as simple as education. I say simple because there is a lot of positive buzz around using AI in education. But even there, a lot more evidence needs to come in.
Thank you, Ravi, and we’re honored to have you on the new International Independent Scientific Panel. So if I may jump to you, Anne, and you’re an AI scientist yourself. You know, all of us know you as a special envoy of President Macron, who made the February summit happen last year in Paris, but you’re also an AI scientist. So from your perspective, you kind of lived in these two worlds. So what works best for the interface? What kind of scientific evidence would you take to President Macron if you were to convince him to change the policy?
Well, thank you for the question. I studied AI a long time ago, but I’m not really a scientist. But I try to understand, of course. Understanding, I think, is probably the very first thing. And before we get to policymakers, I think it’s for citizens, for us as human beings. The things that we don’t understand, we tend to be more afraid of. I often quote the scientist Marie Curie. She wasn’t an AI scientist, but she is one of the brightest scientists we have had, a two-time Nobel laureate. And there’s a wonderful quote by her: nothing in life is to be feared; everything is to be understood. And now is the time to understand more, because of course there were things to be afraid of in her time, and there are now as well.
So trying to understand things, having scientific panels, is definitely the right thing to do. And in France we are fully supportive of the scientific panel. We’re very proud that Joëlle Barral is our nominee; she’s a scientist in AI and health and a member of the panel. This is absolutely excellent. So, yes, understanding things is absolutely key. And then maybe just a second point, to give an example of how understanding something or not can lead to very different policy decisions in the field of AI and work. We’ve had predictions. I remember in 2013, during the previous AI revolution, scientists, I believe at Oxford, said that within 10 years half of the jobs would disappear. We haven’t seen that.
At the AI summit in Bletchley Park, for very good reasons, we had frontier AI leaders, in particular Elon Musk, saying that within two years half of the jobs would disappear. Of course, the fact that this didn’t happen doesn’t mean that there isn’t a risk for work. Of course there’s a risk for work. But if your potential or probable outcome is the end of jobs, then you need to think about universal basic income: what are we going to do with all the people who don’t have jobs? If what economists are saying is that 80% of jobs will be transformed, then the policy outcome is training, skilling, reskilling, and helping to educate people. That’s why listening to economists, and having the International Labour Organization and other institutions really follow closely what is happening in which countries, for younger people, for older people, for women, for men, for different types of jobs, is super important.
Merci beaucoup, Anne. Merci. And I’m going to turn to you, Professor Sood. You occupy an important position within the Indian system, and you look at science broadly. And India has deployed some of these technologies at societal scale: India Stack, the digital public infrastructure. So how do you look at the AI opportunity, and importantly, how do you look at AI risks? And how are you prioritizing R&D allocations to harness the opportunities and manage the risks?
Thank you very much for having me on the panel. On all the aspects you asked about, we have had very extensive consultations across all stakeholders. And we came out with the National AI Governance Framework, not a regulatory framework, but a framework for how we really handle all aspects of AI. There we have looked at how we enable compute facilities and compute resources for our people, because we are not at the scale where a few trillion dollars are being invested. So we came out with a framework which we think, with public-private partnership, could enable it. And we could see the results of that within a year, as demonstrated at the AI Summit with the release of AI models and so on.
The other aspect which is very important, as you rightly said, is risk assessment. This is where, as has been mentioned, our experience with the digital public infrastructure comes in, which has been rolled out at a very large public scale with safety and security challenges as difficult as in AI. AI, of course, is more difficult; we still do not know the risks. But when we were dealing with the digital public infrastructure, whether for financial transactions or for identity verification and so on, it was a challenge. And that was addressed by embedding governance through technical design. This is what we call techno-legal, which the Honourable Prime Minister mentioned at the Paris summit.
And also mentioned here. So this is where we are suggesting that this could be one way to look at it. It’s not that everything is laid out; we will need a framework for that, and we will need technologies for that. But this is one way to have a smooth interaction, if we can bring in this technological framework.
Thank you so much for those insights. Since we are running out of time, I’m going to discriminate against the men on the panel, so my apologies in advance. I’m going to turn back to you, Soumya and Anne, for a 30- or 40-second reflection each. What do you think about the pace and direction of technology opportunities, including for accelerating scientific discovery, and of risk? What would be your advice for the International Independent Scientific Panel? Maybe Anne, you can go first. Forty seconds.
Yes, I think AI has strong potential for helping science. We’ve seen that with the two Nobel Prizes, in physics and chemistry, a year back. There are many more areas in science where AI can help. But it will only be possible if we have databases of scientific data that are available to the world, constructed by scientists, and funded by governments and international institutions around the world. So this is a very important topic for research.
Thank you, Anne. Soumya, you have the last one.
Yes, I agree very much with Anne. And I think that the scientific panel could actually help network many more groups of scientists from around the world, perhaps sectorally: for example, what’s happening in health, in education, in agriculture, looking at the evidence as it emerges, encouraging research, setting priorities, but also looking at safety and risks, because I think that’s going to be very important. There may be unanticipated risks and harms that we have not considered. And, of course, equity: being a UN-led panel, ensuring that equity is at the heart of AI and that it’s being done for the public good.
Fantastic, thank you. That’s a great closing. Ladies and gentlemen, please join me in thanking our outstanding panel. And we are going to move straight to the closing. Over to you, Anil.
Thank you to the panel for a rich and forward-looking discussion. To close this session, it is my honor to invite Josephine Teo, Minister for Digital Development and Information of Singapore, to deliver the closing remarks. Minister Josephine Teo.
Good morning, everyone. First, allow me to thank the Secretary-General for his remarks, which serve as very useful guidance to all of us working on this important technology. For the closing this morning, I thought it would perhaps be useful to offer a perspective from a small state. Singapore has a population of just 6 million people, and more than 30 years ago at the UN we became the convener of the Forum of Small States, which still has about 108 members. I will make three points on how we look at developments on this front. The first point is that we believe in AI being used as a force for the public good, but to do so, it is important that we continue to invest in the science that underpins it and ground trust in evidence. This certainly requires sustained investment in research, and it is also the reason why we set aside a billion dollars in a national AI R&D plan, which will include foundational and applied research into responsible AI. We believe in it, and we have to put money behind this effort. There are, of course, other investments, such as building up a Digital Trust Centre.
It’s our designated AI safety institute, and it has been participating in important conversations on this topic, as well as setting up a centre for advanced technologies in online safety. So those are just some of the efforts that we can dedicate resources to as a small state. The second point I want to make is that there is almost always going to be a tension between moving quickly, given the pace of AI development, and moving carefully, given the latest evidence that presents itself on what we should be paying attention to. Both impulses are necessary, and we believe it is not impossible to balance them through the integration of science and policy. It is not easy, but it is not an effort that we should give up on.
I should just add that on this score, it will be much better if we can cooperate internationally to develop sound approaches that can also be interoperable across different jurisdictions. This is one effort that we believe underpins the work being carried out by the UN. And this brings me to my third point. I want to highlight the important role that an organisation like the United Nations plays in facilitating global discourse to bridge science and policy. I cannot overemphasise the importance of this effort. We must recognise that the global AI governance landscape is becoming increasingly fragmented. There are multiple initiatives, frameworks and institutions. The UN’s unique value lies in its legitimacy and inclusiveness to encourage interoperability across efforts.
The Secretary-General talked about this too. We therefore welcome the establishment of the Independent International Scientific Panel on AI, building on the work of the UN High-Level Advisory Body on AI, which published its report on governing AI for humanity at the end of 2024. We note that the panel’s multidisciplinary approach, covering machine learning, applied AI, social science and ethics, is necessary to address the complexity of AI governance challenges. Finally, I would just like to acknowledge that we now have substantial convergence on the high-level AI principles. Yoshua talked about this: transparency, accountability, fairness, safety. But the challenge is in operationalizing them. We need to find standardized evaluation methodologies that work across different regulatory contexts.
We need capacity building so that all countries can meaningfully engage with the technical challenges, not just those with large AI research ecosystems. I would encourage all stakeholders to view scientific input not as a constraint on policy flexibility, but as a foundation for more durable, effective governance that can maintain public trust. We need to keep the conversations going, where science informs governance and governance sharpens science. I would perhaps end by highlighting Singapore’s continued commitment to contribute to advancing these discussions. We were very fortunate to host the International Scientific Exchange on AI Safety and to bring about the Singapore Consensus on Global AI Safety Research Priorities.
Yoshua was in Singapore for this very momentous event. We will continue to participate in joint testing efforts of the International Network for Advanced AI Measurement, Evaluation and Science. We have organized two editions of the Singapore AI Safety Red Teaming Challenge, the first multicultural and multilingual AI safety red-teaming exercise focused on the Asia-Pacific region. And as chair of the ASEAN Working Group on AI Governance, we have actively spearheaded efforts to foster a trusted environment in ASEAN by adapting global norms and best practices for the region, bringing about regional harmonization through the ASEAN Guide on AI Governance and Ethics, and expanding it to address the risks in generative AI. We are now working within ASEAN to explore practical tools for AI safety testing, and we aim to collectively develop a set of AI safety benchmarks that reflect our region’s concerns.
And finally, I’d like to welcome all colleagues to join us in Singapore for the second edition of the International Scientific Exchange, which we expect to take place on the 17th and 18th of May, and we look forward to furthering these discussions.
Thank you very much once again. Thank you, Minister Teo, for your closing remarks. This session is now concluded. Thank you very much. Thank you.
“António Guterres’ leadership places science and multilateral cooperation at the heart of AI governance.”
The knowledge base records Guterres emphasizing the importance of science in global AI governance and calling for evidence-based, multilateral approaches [S20].
“Policy must be built on trusted facts rather than hype or disinformation.”
Guterres called for replacing hype and fear with shared, evidence-based approaches to AI policy [S5].
“The creation of an Independent International Scientific Panel on Artificial Intelligence, described as fully independent, globally diverse and multidisciplinary, to give every country a clear analytical baseline.”
The panel is identified in the knowledge base as the first global scientific body on AI, independent and multidisciplinary, intended to provide expert evidence for all nations [S92] and [S93].
“The UN has been “indispensable to not just the protection of people, but the preservation of our species” by helping humanity live with nuclear weapons without using them.”
The knowledge base highlights the UN’s broader indispensable role in preventing regional crises and preserving humanity, though it does not specifically mention nuclear-weapon deterrence [S90] and [S91].
“Rapid, uneven growth of AI capabilities creates a lag between scientific publications and policy action because studies involving people can take months while AI systems evolve week by week.”
The pacing problem between fast-moving technology and slower governance is documented in the knowledge base, underscoring the same lag described by Bengio [S47].
“The United Nations, created just over 80 years ago, remains humanity’s greatest accomplishment and an indispensable platform for coordinating AI governance.”
The UN’s indispensable nature and its 80-year history are affirmed in the knowledge base, which describes the organization as essential for global cooperation and crisis prevention [S90] and [S30].
The discussion shows strong consensus that science must be at the heart of AI governance, that common technical standards and shared baselines are vital for interoperability, that inclusive and equitable evidence‑generation is essential, and that hype should be replaced by factual, evidence‑based policy. There is also agreement on the need for sustained investment and capacity building to support these goals.
High – The convergence across UN leadership, academia, industry and regional representatives indicates a solid foundation for coordinated, science‑driven AI governance, increasing the likelihood of effective global policy frameworks.
The speakers largely converged on the need for science‑based, evidence‑driven AI governance, the importance of international cooperation, and the role of the UN‑backed scientific panel. The main points of contention concerned the preferred mechanism for translating science into policy – high‑level principles versus detailed technical standards – and the balance between global versus local evidence generation.
Low to moderate. While there are nuanced differences in implementation strategies, there is broad consensus on overarching goals. This suggests that future work on the Independent International Scientific Panel can progress with relatively smooth coordination, though careful attention will be needed to reconcile principle‑based approaches with concrete technical standards and to integrate both global and local evidence streams.
The discussion was driven forward by a series of pivotal remarks that repeatedly returned to the need for shared scientific baselines, humility in the face of uncertainty, and concrete mechanisms for translating evidence into policy. Guterres’ framing of science as a universal language set the stage, while Bengio’s climate‑tipping‑point analogy and Brad Smith’s critique of hype sharpened the focus on precautionary, evidence‑based governance. Contributions from Swaminathan and Bouverot linked these ideas to real‑world crises and labor policy, respectively, and Sood’s ‘techno‑legal’ proposal offered a tangible design pathway. Josephine Teo’s closing synthesis tied the threads together, reaffirming the UN’s role as the integrator of speed, safety, and inclusivity. Collectively, these comments redirected the conversation from lofty aspirations to actionable, interdisciplinary strategies, shaping a nuanced, forward‑looking consensus on how science can effectively inform global AI governance.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.