Why science matters in global AI governance

20 Feb 2026 18:00h - 19:00h

Why science matters in global AI governance

Session at a glance

Summary

This discussion focused on the critical role of science in international AI governance, centered around the United Nations’ establishment of an Independent International Scientific Panel on Artificial Intelligence. UN Secretary-General António Guterres opened by emphasizing that effective AI governance requires facts rather than hype, announcing the confirmation of 40 experts to the new scientific panel designed to provide evidence-based analysis for global AI policymaking.


The conversation highlighted the unique challenges of governing AI technology, particularly its rapid pace of development and global reach that transcends national borders. Professor Yoshua Bengio, a leading AI researcher and panel member, discussed the difficulty of achieving scientific consensus on AI risks and benefits, drawing parallels to climate science where uncertainty about catastrophic outcomes still requires policy attention. He emphasized the need for high-level principles that can adapt to rapidly changing technological details.


Microsoft’s Brad Smith stressed the importance of building common understanding before rushing to solutions, arguing that disagreements often stem from lack of shared problem definition rather than fundamental differences. He advocated for using AI to make humans smarter rather than simply creating smarter machines, and praised the UN’s role in fostering international cooperation.


The panel discussion explored practical challenges in the science-policy interface, with experts from India, France, WHO, and Singapore sharing experiences from COVID-19 response, digital public infrastructure deployment, and national AI strategies. Key themes included the need for inclusive governance that represents diverse global perspectives, particularly from the Global South, and the importance of evidence-based policymaking that can adapt quickly to emerging risks and opportunities.


The session concluded with Singapore’s commitment to continued international collaboration, demonstrating how multilateral scientific cooperation can inform more effective and equitable AI governance globally.


Key points

Major Discussion Points:

Science-based AI governance framework: The establishment of the UN’s Independent International Scientific Panel on AI to provide evidence-based analysis and create shared understanding across nations, moving beyond philosophical debates to technical coordination and risk assessment.


Bridging the knowledge-policy gap: The challenge of translating rapidly evolving AI research into actionable policy, particularly given the speed of AI development outpacing traditional scientific study and policy-making timelines.


Global cooperation and inclusivity: The need for international coordination to prevent fragmented AI governance, with emphasis on ensuring developing countries and diverse voices (including women, farmers, and marginalized communities) are included in AI policy discussions.


Balancing innovation with safety: The tension between moving quickly to harness AI’s benefits for sustainable development goals while moving carefully to assess and mitigate risks, particularly around employment impacts, social effects, and potential catastrophic outcomes.


Practical implementation challenges: Moving from high-level AI principles (transparency, accountability, fairness) to operational standards, benchmarks, and technical solutions that can work across different regulatory contexts and cultural settings.


Overall Purpose:

The discussion aimed to launch and legitimize the UN’s new science-policy interface for AI governance, specifically the Independent International Scientific Panel on AI. The session sought to establish how scientific evidence can inform global AI policy-making, ensure inclusive participation from all nations and communities, and create frameworks for managing AI’s rapid development while maximizing benefits and minimizing risks.


Overall Tone:

The discussion maintained a consistently serious, collaborative, and optimistic tone throughout. Speakers emphasized urgency while remaining constructive, with a strong focus on multilateral cooperation and evidence-based decision-making. There was notable reverence for the UN’s role and a shared commitment to ensuring AI serves humanity broadly. The tone was professional and diplomatic, befitting a high-level international forum, with speakers building on each other’s points rather than expressing disagreement.


Speakers

Speakers from the provided list:


Anil Ananthaswamy – Moderator/Host, Author of “The Elegant Math Behind Machine Learning”


António Guterres – Secretary General of the United Nations


Yoshua Bengio – Professor, Scientific Director of MILA, AI researcher, member of UN Scientific Advisory Board and International AI Safety Report leadership, appointed to Independent International Scientific Panel on AI


Brad Smith – Vice Chair and President of Microsoft Corporation


Balaraman Ravindran – Professor at IIT Madras, member of International Independent Scientific Panel


Soumya Swaminathan – Former Chief Scientist at WHO (first woman chief scientist)


Ajay Sood – Principal Scientific Advisor to the Government of India


Anne Bouverot – France’s Special Envoy for AI, appointed by President Macron


Amandeep Singh Gill – Undersecretary General and Special Envoy for Digital and Emerging Technologies at the UN, moderator


Josephine Teo – Minister for Digital Development and Information of Singapore


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speaker list.


Full session report

This high-level discussion at the United Nations represented a pivotal moment in establishing science-based international AI governance, centred around the launch of the Independent International Scientific Panel on Artificial Intelligence. The session brought together global leaders from government, academia, industry, and international organisations to address the fundamental challenge articulated by moderator Amandeep Singh Gill: “We cannot govern what we do not understand.”


The Foundation: Science-Based Governance Over Speculation

Secretary-General Guterres opened with a powerful premise that shaped the entire discussion: AI governance must be grounded in facts rather than speculation, hype, or disinformation. He emphasised that “AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it, let alone govern it.” This challenge is compounded by AI’s borderless nature, making unilateral national approaches insufficient.


The Secretary-General announced the confirmation of 40 experts to the new Independent International Scientific Panel on AI, designed to provide evidence-based analysis that helps countries “move from philosophical debates to technical coordination.” Crucially, he emphasised the need for “meaningful human oversight in every high-stakes decision, in justice, health care, credit” and that “responsibility is never outsourced to an algorithm.”


The panel represents a practical architecture putting science at the centre of international cooperation, with the goal of delivering its first report ahead of the Global Summit and Global Dialogue on AI Governance in July. Guterres positioned this scientific approach not as a brake on progress but as “an accelerator for solutions” that makes progress “safer, fairer, and more widely shared.”


Navigating Uncertainty: The Challenge of Acting Without Complete Knowledge

Professor Yoshua Bengio, one of the world’s most cited AI researchers and a member of the new panel, provided crucial nuance by acknowledging that AI governance faces unique challenges compared to other scientific policy areas. Unlike climate science, where there is greater scientific consensus, AI researchers themselves often disagree about future expectations and interpretations of existing evidence. Bengio drew parallels to climate tipping points, noting that “we don’t have the experience of machines that are really smart and can change society, and be even potentially smarter than us.”


This uncertainty paradox—needing to act on potentially catastrophic risks without complete certainty—emerged as a central theme. Bengio argued that even without proof of specific risks, policymakers must pay attention when potential outcomes could be catastrophic, regardless of probability. He expressed particular concern about “how this will unfold for developing countries in the global south” and emphasised the importance of ensuring “everyone is at the table and no one is on the menu.”


Dr Soumya Swaminathan, former Chief Scientist at WHO, reinforced this perspective by drawing from COVID-19 experience, where “we had to review a couple of hundred publications every day to understand what was happening.” She positioned the new AI panel as similar to the Intergovernmental Panel on Climate Change (IPCC), emphasising the need for rapid evidence processing and adaptive policymaking. Crucially, she highlighted lessons from the pandemic about ensuring recommendations are contextually relevant, noting that some WHO recommendations were applicable in high-income countries but not in low-income settings due to different contexts.


Industry Perspective: Building Common Understanding Before Solutions

Microsoft’s Brad Smith provided a unique industry perspective that strongly endorsed multilateral approaches. Drawing on his view of economic theory about humanity’s tendency to repeat mistakes every 80 years due to generational memory loss, Smith argued that humanity also risks forgetting its great successes—particularly the creation of the UN, which he called “one of humanity’s greatest accomplishments of the 20th century.”


Smith’s most significant contribution was identifying a fundamental flaw in policy discussions: people rush to debate competing solutions without establishing shared understanding of problems. “One of the reasons people so often disagree about the solution is they don’t have a common understanding of the problem,” he observed. He illustrated this point by describing how he used Microsoft Copilot to grade tech industry predictions, finding an average grade of 25%.


He also challenged the prevalent focus on building smarter machines, arguing instead for using AI “to make people smarter, to help us do what we need to do,” emphasising that human capability is “neither fixed nor finite.”


Regional Perspectives and Implementation Challenges

The panel discussion revealed significant regional variations in AI governance approaches and evidence gaps. Professor Balaraman Ravindran from IIT Madras highlighted critical knowledge gaps about AI’s social impacts in India, noting that “all of these stories are coming to us from the west, so what is it that’s happening in India?” He emphasised the need for India-specific research on AI’s effects on children, social fabric, and different cultural contexts.


Professor Ajay Sood, India’s Principal Scientific Advisor, outlined India’s approach through the National AI Governance Framework, emphasising “techno-legal” solutions that embed governance through technical design—an approach also referenced by India’s Prime Minister. Drawing from India’s successful digital public infrastructure experience, this approach aims to integrate governance mechanisms directly into system architecture.


Anne Bouverot, France’s Special Envoy for AI, brought a European perspective emphasising the importance of accurate predictions for appropriate policy responses. She demonstrated how different assessments of AI’s employment impact lead to fundamentally different policy approaches: predictions of job elimination suggest universal basic income, while predictions of job transformation point toward training and reskilling programmes. She also mentioned that Joëlle Barral would serve as France’s nominee to the scientific panel.


Singapore’s Minister Josephine Teo provided a compelling example of how smaller states can contribute meaningfully to global AI governance. Despite having only 6 million people, Singapore has invested significantly in AI R&D and established an AI safety institute. She highlighted Singapore’s role as convener of the Forum of Small States with 108 members, demonstrating that effective AI governance requires diverse participation beyond major powers.


The Challenge of Pace and Inclusivity

A recurring theme was the tension between AI’s rapid development pace and the time required for thorough scientific assessment. Bengio noted that “because it’s moving so fast there’s always going to be a lag between even like the scientific papers” and policy responses, with studies involving people taking “months” while AI capabilities continue advancing.


Swaminathan emphasised that AI governance must include “the voices of women, a low-income woman, a farmer in a remote place” who will use technology very differently from large-scale users in developed countries. This global perspective was reinforced by multiple speakers who stressed that AI’s worldwide effects require globally representative governance structures.


Operationalising Principles: From Consensus to Implementation

While speakers noted substantial convergence on high-level AI principles—transparency, accountability, fairness, and safety—the primary challenge lies in operationalisation. Minister Teo emphasised the need for “standardised evaluation methodologies that work across different regulatory contexts” and capacity building to ensure all countries can meaningfully engage with technical challenges.


Bengio advocated for high-level principles that avoid technical details since “the details are going to change,” while others emphasised adaptive frameworks that can evolve with technological development. The concept of “techno-legal” approaches, embedding governance through technical design, emerged as a promising direction for making principles operational.


Global Cooperation and Concrete Next Steps

Throughout the session, speakers consistently emphasised the UN’s unique legitimacy and inclusiveness for AI governance. Guterres argued that without common baselines, “fragmentation wins, with different regions and different countries operating under incompatible policies and technical standards.”


The session concluded with concrete commitments: Singapore will host the second International Scientific Exchange on AI Safety in May, ASEAN will develop regional AI safety benchmarks, India will continue implementing its governance framework through public-private partnerships, and Microsoft pledged support for UN efforts.


Conclusion: A Framework for Evidence-Based Governance

This discussion established a framework for AI governance that is both scientifically grounded and politically realistic. By positioning the Independent International Scientific Panel as a bridge between evidence and policy, the session created a foundation for moving beyond speculation toward fact-based governance.


The session’s ultimate message was clear: effective AI governance requires transforming uncertainty into understanding through rigorous scientific assessment, ensuring that policy decisions serve humanity’s collective interests. As Secretary-General Guterres concluded, “Less hype, less fear. More facts and evidence. Guided by science, we can transform AI from a source of uncertainty into a reliable engine for the sustainable development goals.”


Session transcript

Anil Ananthaswamy

Today’s session begins from a simple but powerful premise. We cannot govern what we do not understand. It is my honor to open this session with a special address by the Secretary General of the United Nations, whose leadership has placed science and multilateral cooperation at the forefront of global AI governance. So please join me in welcoming His Excellency Antonio Guterres.

António Guterres

Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Thank you for joining this discussion on the role of science in international AI governance. We are barreling into the unknown. AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it, let alone govern it. AI does not stop at borders, and no nation can fully grasp its implications on its own. If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype or disinformation. We need facts we can trust and share across countries and across sectors. Less noise, more knowledge. That is why the United Nations is building a practical architecture that puts science at the center of international cooperation on AI.

And it starts with the Independent International Scientific Panel on Artificial Intelligence. This panel is designed to help close the AI knowledge gap and assess the real impacts of AI across economies and societies, so countries at every level of AI capacity can act with the same clarity. It is fully independent, it is globally diverse, and it is multidisciplinary, because AI touches every area of every society. And I’m delighted that the General Assembly of the United Nations confirmed the 40 experts I proposed to member states. Now the real work begins, on a fast track to deliver a first report ahead of the Global Summit and Global Dialogue on AI Governance in July. The panel will provide a shared baseline of analysis,

helping member states move from philosophical debates to technical coordination, and anchor choices in evidence, so policy is neither a blunt instrument that stifles progress nor a bystander to harm. That is how science transcends decision-making. When we understand what systems can do and what they cannot, we can move from rough measures to smarter, risk-based guardrails. Guardrails that protect people, uphold human rights, and preserve human agency. Guardrails that build confidence and give business clarity so innovation can move faster in the right direction. Science-led governance is not a brake on progress. It is an accelerator for solutions. A way to make progress safer, fairer, and more widely shared. It helps us identify where AI can do the most good the fastest.

And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countries can prepare, protect, and invest in people. Today, international cooperation is difficult. Trust is strained, and technological rivalry is growing. Without a common baseline, fragmentation wins, with different regions and different countries operating under incompatible policies and technical standards. A patchwork of rules will raise costs, weaken safety, and widen divides. Science is a universal language. Guided by the independent panel and the global dialogue on AI governance, we can align with the world. We can align our technical baselines. When we agree on how to test systems and measure risk, we create interoperability. So a start-up in New Delhi can scale globally with confidence because the benchmarks are shared, and safety can travel with the technology.

Finally, let us be clear. Science informs, but humans decide. Our goal is to make human control a technical reality, not a slogan. And that requires meaningful human oversight in every high-stakes decision, in justice, health care, credit. And it requires clear accountability so responsibility is never outsourced to an algorithm. People must understand how decisions are made, challenge them, and get answers. Excellencies, ladies and gentlemen, the message is simple. Less hype, less fear. More facts and evidence. Guided by science, we can transform AI from a source of uncertainty into a reliable engine for the sustainable development goals. Let us build a future where policy is as smart as the technology it seeks to guide. Thank you.

Anil Ananthaswamy

Thank you, Secretary General, for those inspiring opening remarks. Ladies and gentlemen, we were going to have Mr. Brad Smith, Vice Chair and President of Microsoft Corporation, as our next speaker, but he’s running a bit late, so we will move to the next item in the agenda. I would like to welcome Professor Yoshua Bengio to the stage, Scientific Director of MILA and one of the world’s leading AI researchers. He and I will be in a fireside chat, and we’re hoping that Mr. Brad Smith will be able to join us very soon. Thank you. So, welcome Professor Bengio.

Yoshua Bengio

Thank you for having me.

Anil Ananthaswamy

Our pleasure. So, you are the most cited computer scientist. And I looked it up: you’re actually the most cited living scientist today. You have played a unique role at the global science-policy interface, including through the UN Scientific Advisory Board and your leadership of the International AI Safety Report. So from your perspective, how do these science-policy interfaces actually work in practice, and where do they add the most value?

Yoshua Bengio

So it’s tricky, right, because there are many different views, especially different interests in business, in different governments. And the role of science, the role of the kind of synthesis of science that we want for the UN panel, that we have sought for the AI Safety Report, is to provide a shared understanding as a basis for those political discussions, and not be influenced, as much as is humanly possible, by those tensions that exist in our societies. And I think it’s particularly important because, maybe unlike in the case of climate, the scientists themselves don’t always agree on what to expect for the future or even how to interpret the science that exists.

I just want to add something. Something that’s a little bit subtle about this kind of exercise is being able to recognize the uncertainty and the divergences that exist: where is it that scientists agree, where is it that the evidence is strong, where is it that we have clues that matter. Even if we’re not certain about a particular risk, we might have clues about it. But if the risk has huge severity, in other words, if it does unfold it could be catastrophic, then policymakers need to pay attention. And it’s always difficult when we don’t have proof that something terrible is going to happen. Maybe a good analogy is tipping points in climate, right?

Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. So the situation is similar in AI, in the sense that we don’t have the experience of, say, machines that are really smart and can change society, and be even potentially smarter than us. So how can we come to the right policy decisions? That’s why it is so important to have as neutral and as fact-based an evaluation of what is going on available to everyone, in a language that is accessible to everyone, and of course for policymakers. Which, by the way, is difficult for scientists to achieve. They need help, they need iterations, they need feedback from people who are used to the interface between science and policy.

Anil Ananthaswamy

Is there anything in particular about the highly technical nature of AI and also the pace of change that makes this interface particularly difficult?

Yoshua Bengio

Yes, yes. The facts shown in the scientific benchmarks across labs, companies, and academia show very rapid growth in the capabilities of these systems, and that growth is uneven. So we see AIs even surpassing most people on some measurements of capability, and being kind of stupid, or like a six-year-old, on some other things. So it’s very difficult to grasp what that means. But because it’s moving so fast, there’s always going to be a lag. Even the scientific papers take time to be written, and if there are studies, think about studies that involve people, they’re going to take months. So by the time we start seeing clues that there’s a potential problem, you can think of something recent that was not expected, like the psychological effects on people of these chatbots: we now have lots of anecdotal evidence, and we’re only starting to see the scientific studies. And of course on the policy side it’s going to be even more difficult, even later, because those discussions are going to happen after we see scientific evidence. So there is going to be a lag, and that’s a real problem, because things could move.

Anil Ananthaswamy

So maybe that leads well into our next question. We often hear that AI governance is moving too slowly and from your experience, what kinds of scientific assessments or benchmarks could realistically keep pace with this rapid change?

Yoshua Bengio

Yeah, that’s a great question. My opinion on this is that we should be thinking about not just policy in the usual sense of coming up with principles; we should try to strive for high-level principles that can be applied without having to go into the details, because the details are going to change. And the second thing is, I think we should strive for technologies that are going to help implement those guardrails in the field, in the deployment of AI, because otherwise there’s not enough time.

Anil Ananthaswamy

Well, thank you for those insights. And also congratulations on your recent appointment to the Independent International Scientific Panel on AI. In a few words, how do you see this new panel helping to strengthen the link between science and global AI policymaking?

Yoshua Bengio

Well, I think there’s something really important about this panel: its global aspect and being rooted in the UN. And the reason I’m saying this is that AI is going to be transforming our world very clearly, and it’s going to have global effects, whether it is on the good side, the benefits, or on the risks, but also the kind of power relationships that are going to be changing in the future. And I’m personally very concerned about how this will unfold for developing countries in the global south. And we need to work in a multidisciplinary way so that we can foresee those effects and we can start discussions to make sure that everyone is at the table and no one is on the menu.

Anil Ananthaswamy

Well said, Professor Bengio. Well, thank you very much for kick-starting our discussion. We will now turn to our panel. So, ladies and gentlemen, it is essential that discussions about AI policy include the voices of key industry actors, and I am pleased to invite Mr. Brad Smith, Vice Chair and President, Microsoft Corporation, for his keynote address.

Brad Smith

Well, good morning, everyone. It’s a pleasure to be here. My apologies for being a few minutes late. I want to offer a couple of thoughts this morning. The first thing I think we should come together to think about is that, in my opinion, this is a moment in time when we need to reflect on and reinvest in the importance of the United Nations. There is a well-known economic theory that says that humanity is, in many ways, almost destined to repeat its great economic mistakes every 80 years. The reason it’s 80 years is because that is basically the lifespan of human beings. And so every 80 years, almost everyone who had any living memory of a prior financial calamity has left the planet.

If you look at the Great Recession that started in 2008, what you realize is that it happened 79 years after the stock market crash that led to the Great Depression in 1929. And you can follow this series of financial mistakes all the way back to the bursting of the tulip bubble in the Netherlands hundreds of years ago. I think there is a corollary worth thinking about. Just as there is a risk that humanity forgets the mistakes it made 80 years ago, humanity runs the risk of forgetting the great successes it created 80 years ago. It was just over 80 years ago that the world came together to create the United Nations. It was, in my opinion, one of humanity’s greatest accomplishments of the 20th century.

It is a unique organization in a very imperfect world. And so, of course, on any day and any year, it is possible for anyone to blame the United Nations for the imperfections that we see all around us. But the truth is this. Those imperfections are fewer, and their consequences are less disastrous, in my view, because of the United Nations. And one of the great things about working at Microsoft in a job like Microsoft, in my opinion, is that I get to work in a global organization. We have subsidiaries in 120 countries. We do work in 190 countries. We see the world. It turns out that everywhere we go, we see the United Nations. Sometimes it’s the United Nations Development Program, working to foster economic development.

Sometimes it is UNHCR, helping refugees. Sometimes it is the UN Office of Human Rights, seeking to protect human rights. But the truth is, if there’s a problem, the United Nations is almost always part of the solution. We need to remember this. And we need to remember that however challenging the last 80 years have been, we have managed, as humanity, as a species, to live with the ever-constant presence of nuclear weapons without using them or destroying ourselves. The United Nations has, in fact, in my view, been indispensable to not just the protection of people, but the preservation of our species. Why does that matter now? Why should we talk about it today and this week in Delhi?

Well, because here we are on the cusp of the future. A technology that we all know will likely change the future. Here we are in the second month of the second quarter of the 21st century, and we need to focus on how we bring the institutions on which we rely into that future. So then let me talk about a second aspect that I think is so important to think about this month. One of the things I’m constantly struck by, leading a global organization, is how often everyone disagrees with each other about almost everything. But one of the things I’ve learned along the way is that I think one of the reasons people so quickly disagree is that we rush so quickly to debate competing solutions.

This happens in domestic politics. It happens in international diplomacy. It, frankly, happens in a global company. It actually happens everywhere, even in families. As soon as there’s a problem, people want to talk about the solution. And then people have different solutions, and then they debate, and they disagree, and they argue, and sometimes it’s even worse than that. One of the things I’ve learned is the reason people so often disagree about the solution is they don’t have a common understanding of the problem. They don’t spend enough time talking about the problem. They don’t have a shared contextual understanding of the problem they’re trying to solve. They’re too quick to want to blame someone for the problem, and then that spirals into a discussion that becomes completely unconstructive.

Why does that matter today? Because what we’re here to talk about today is all about creating a more common understanding together, based on science, of where artificial intelligence is going. This is an indispensable tool. Indeed, it’s a critical service for humanity, so we can all learn together, we can all think together, we can all understand together what is going on in the world. I think it’s especially critical, to be honest, when it comes to artificial intelligence, because I think if you consider most of the conversations you have about this technology, I would argue that it has two flaws. The first flaw is it usually involves people making very grandiose predictions about the future.

You know what? I’ve worked in the tech sector for 32 years. I have listened for more than three decades to my colleagues in my industry around the world make bold predictions about the future. No one ever holds them accountable a decade later for whether they were right or wrong. I used the researcher agent in Microsoft Copilot a couple weekends ago, and I loaded a lot of names. I won’t say whom, but you can guess. And I said, look at all the predictions they made about all the technologies, and look at the predictions they made about when these technologies would come to do something or another, and give them a grade. The average grade was 25%.

You couldn’t even get close to the top. You were at the bottom. So let’s just understand one thing together. There is no such thing as a crystal ball. No one has one. But what we do have is the ability to understand where we are today. And what we do have is a better understanding to just appreciate what is happening each and every year. There is a second flaw, in my view, in many of the conversations that take place, including at this AI summit. Everybody wants to talk about how they’re going to make machines smarter. That’s interesting. I think it’s interesting to imagine living in a world where a data center is like a country of geniuses.

But as I mentioned yesterday, compared to the people who lived in the Bronze Age, we’re all geniuses. We’re all geniuses already. What that should remind us of is that human capability is neither fixed nor finite. And so what really matters, in my opinion, is not whether we are going to build machines that are smarter than humans. Yes, in some ways we will. But how will we use those machines to make people smarter, to help us do what we need to do? That is what this effort is all about. Now, let’s harness the power of science to build a common understanding of what is changing each year, and then let’s connect it with the global dialogue on governance so we can pursue policies that will ensure that this technology serves people.

There’s no better place to get started than here. There’s no better time than now. And let’s face it, there is no better institution on the planet that can do more to serve humanity and protect the world than the United Nations. And on behalf of Microsoft, I just want you to know we are putting our full energy and resources into doing everything that we can to help. Thank you very much.

Anil Ananthaswamy

Thank you. Thank you, Mr. Smith, for those insights on responsibility, accountability, and the role of industry. We now turn to our panel, which brings together scientific leadership, public policy expertise, and international coordination. Please welcome to the stage our speakers: Professor Balaraman Ravindran, IIT Madras; Soumya Swaminathan, former Chief Scientist, WHO; Ajay Kumar Sood, Principal Scientific Advisor to the Government of India; and Anne Bouverot, France’s Special Envoy for AI. I am also pleased to introduce our moderator, Amandeep Singh Gill, Under-Secretary-General and Special Envoy for Digital and Emerging Technologies. I invite him to guide the discussion. Thank you very much.

Amandeep Singh Gill

Thank you very much. Thank you, Anil, for leading us, and for those who have not read his book, Why Machines Learn: The Elegant Math Behind Modern AI, please do have a go at it. We cannot govern something that we don’t understand. And something as simple as this: if 90% of AI is matrix multiplication, then, as he was explaining, a 0.01% improvement in the efficiency of matrix multiplication has huge energy implications. So I want to welcome our esteemed panelists. The stage has been set by very inspiring keynotes and a fireside chat, so we will dive straight in. And since we are running a little short of time, I’m going to compress the two rounds into one rapid-fire round.

So all of you have worked on, or are working on, the science-policy interface. And my sense is that there is a loop here between science and evidence, and between evidence and policy. We want to explore that loop today in the context of the significant development of the setting up of the International Independent Scientific Panel at the United Nations. So I want to start with you, Soumya. You were the first Chief Scientist, and the first woman Chief Scientist, at the WHO, and you worked at a very difficult time during COVID, when trusted evidence was so critical. So in your view, what makes this evidence that comes from science trusted and actionable for policymakers?

Soumya Swaminathan

The evidence is evolving very rapidly; the field is moving so fast. In COVID, we had to review a couple of hundred publications every day to understand what was happening on different aspects, on the virus, on the immunology, on how vaccines were working and drugs, and we had to make recommendations based on the best available evidence that day. I think we may be in a similar situation with AI, and it’s wonderful that the UN has now set up this body, which I see as something like the IPCC. I think we do need global governance. We’re talking now about preventing future pandemics by sharing data on pathogens, making sure that we have protocols in place where countries are willing to share that data, and also, of course, sharing the tools, the vaccines or drugs, when they become available, in case there is another pandemic.

Similarly, I hope that this scientific body that’s been set up by the UN would also establish systems that would link to national bodies and systems, and that would ensure the voices of all are heard. One of the issues during COVID was that some of our recommendations were relevant in high-income countries but not in low-income countries, because the context is very different. The WHO was criticized for this, I think rightfully so, and we need to learn from those mistakes. So it’s the voices, for example, of women: a low-income woman, a farmer in a remote place, is going to use technology very differently from a large farmer with access to lots of machines in Europe or North America.

So if AI has to work for everyone, then we need to make sure that those voices are heard. And ultimately, I think that loop you talked about, sometimes policy is made in advance of evidence. You have to. You can’t wait. But the policy must change. It must ask for the relevant evidence and be able to adapt when that is clear.

Amandeep Singh Gill

Thank you very much, Soumya. I’m going to come to you, Ravi, Professor Balaraman Ravindran. Now, as AI policies begin to take shape, and you’ve been involved in some policymaking yourself, what signals from regulators or public sector users should most urgently guide future AI research priorities? In a sense, the loop coming back into research.

Balaraman Ravindran

Thank you for that question. With AI right now, especially in the Global South, we don’t completely understand the implications of adoption: how it is going to affect society, people’s livelihoods, everything. In fact, I also feel that we don’t have enough evidence about how AI is even affecting the social fabric. How are children getting increasingly isolated with the adoption of AI, and is the effect uniform between cities and rural India, where the cultural setup is very different? So if the government, as we heard our Honourable Prime Minister say yesterday, should focus more on youth and the impact of AI on youth, what evidence do we have about what is happening in India? We hear stories about children’s dependence on AI models, and about people who are mentally challenged or under stress, but all of these stories are coming to us from the West. So what is happening in India? When these kinds of policy decisions have to be made, and the government says AI should be pushing efficiency in agriculture, do we have a benchmark in India that can evaluate the effectiveness of these AI models in agriculture? What kinds of flaws arise when I, for example, build a bot that can act as a co-pilot for a farmer? These are bigger challenges, and we have a lot of questions.

Amandeep Singh Gill

If I can quickly follow up: where do you actually see evidence for impact in the Sustainable Development Goals space? Just a quick example or two.

Balaraman Ravindran

So, that was not in the notes he gave us earlier, so I have to think on my feet here. Let me take one thing that we are very familiar with and are working on right now, in the education space. For example, we don’t have evidence of how likely AI interventions are to change student learning behavior. So we have done some preliminary studies. The author of the study is somewhere in the audience, because he has been sending me pictures of the stage. What we have found is that the effectiveness of AI adoption is a direct function of habit. So if the students are using AI more, then they tend to…

But now I don’t know what the causal factor is there. Is it that they use AI more and therefore get a better effect, or do they use AI more because they are getting a better effect? These are questions that we have to ask, even in something as simple as education. I say simple because there is a lot of positive buzz around using AI in education. But even there, we need a lot more evidence.

Amandeep Singh Gill

Thank you, Ravi, and we’re honored to have you on the new International Independent Scientific Panel. If I may jump to you, Anne. All of us know you as the Special Envoy of President Macron who made the February summit happen last year in Paris, but you are also an AI scientist, so you have lived in both worlds. What works best for the interface? What kind of scientific evidence would you take to President Macron if you were to convince him to change a policy?

Anne Bouverot

Well, thank you for the question. I studied AI a long time ago, but I’m not really a scientist. I try to understand, of course, and understanding, I think, is probably the very first thing. Before we move to policymakers, I think it’s for citizens, for us as human beings: the things that we don’t understand, we tend to be more afraid of. I often quote the scientist Marie Curie. She wasn’t an AI scientist, but she is one of the brightest scientists we have had, a two-time Nobel laureate. There’s a wonderful quote by her: “Nothing in life is to be feared; everything is to be understood.” And now is the time to understand more because, of course, there were things to be afraid of in her time, and there are now as well.

So trying to understand things, having scientific panels, is definitely the right thing to do, and we in France are fully supportive of the scientific panel. We’re very proud that Joëlle Barral is our nominee; she’s a scientist in AI and health and a member of the panel. This is absolutely excellent. So, yes, understanding things is absolutely key. And maybe just a second point, to give an example of how understanding something or not can lead to very different policy decisions in the field of AI and work. We’ve had predictions. I remember in 2013, during the previous AI revolution, scientists at Oxford, I believe, said that within 10 years half of the jobs would disappear. We haven’t seen that.

At the AI summit in Bletchley Park, for very good reasons, we had frontier AI leaders, in particular Elon Musk, saying that within two years half of the jobs would disappear. Of course, the fact that this didn’t happen doesn’t mean that there isn’t a risk for work. Of course, there’s a risk for work. But if your potential or probable outcome is the end of jobs, then you need to think about universal basic income: what are we going to do with all the people who don’t have jobs? If what economists are saying is that 80% of the jobs will be transformed, then the policy outcome is training, skilling, reskilling, and helping to educate people. That’s why listening to economists, and having the International Labour Organization and other institutions follow closely what is happening in which countries, for younger people, for older people, for women, for men, for different types of jobs, is super important.

Amandeep Singh Gill

Merci beaucoup, Anne. Merci. And I’m going to turn to you, Professor Sood. You occupy an important position within the Indian system, and you look at science broadly. India has deployed some of these technologies at societal scale: India Stack, the digital public infrastructure. So how do you look at the AI opportunity and, importantly, how do you look at AI risks? And how are you prioritizing R&D allocations to harness the opportunities and manage the risks?

Ajay Sood

Thank you very much for having me on the panel. On all the aspects you asked about, we have had very extensive consultations across all stakeholders, and we came out with the National AI Governance Framework, not a regulatory framework, but a framework for how we really handle all aspects of AI. There we have looked at how to provide compute facilities and compute resources to our people, because we are not at the scale where a few trillion dollars are being invested. So we came out with a framework which we think we could enable through public-private partnership, and we could see the results of that within a year, as demonstrated at the AI Summit: the release of AI models and so on.

The other aspect which is very important, as you rightly said, is risk assessment. This is where, as has been mentioned, our experience with the digital public infrastructure comes in. It has been rolled out at a very public scale, with safety and security requirements that are as difficult as in AI. AI, of course, is more difficult; we still do not know the risks. But when we were dealing with the digital public infrastructure, whether for financial transactions or for identity verification and so on, it was a challenge. And that was addressed by embedding governance through technical design. This is what we call techno-legal, which the Honourable Prime Minister mentioned at the Paris summit.

And he also mentioned it here. So we are suggesting that this could be one way to look at it. Not everything is laid out; we will need frameworks and technologies for that. But this is one way to achieve a smooth interaction if we can bring in this technological framework.

Amandeep Singh Gill

Thank you so much for those insights. Since we are running out of time, I’m going to discriminate against the men on the panel, so my apologies in advance. I’m going to turn back to you, Soumya and Anne, for a 30-to-40-second reflection. Given the pace and direction of the technology, the opportunities, including for accelerating scientific discovery, and the risks, what would be your advice for the International Independent Scientific Panel? Maybe Anne, you can go first. Forty seconds.

Anne Bouverot

Yes, I think AI has strong potential for helping science; we’ve seen that with the two Nobel Prizes in physics and chemistry a year back. There are many more areas in science where AI can help. But that will only be possible if we have databases of scientific data that are available to the world, constructed by scientists, and funded by governments and international institutions around the world. So this is a very important topic for research.

Amandeep Singh Gill

Thank you, Anne. Soumya, you have the last one.

Soumya Swaminathan

Yes, I agree very much with Anne. And I think that the scientific panel could actually help network many more groups of scientists from around the world, perhaps sectorally, for example, what’s happening in health, what’s happening in education, what’s happening in agriculture, looking at the evidence as it emerges, encouraging research, setting priorities, but also looking at safety and risks, because I think that’s going to be very important. There may be unanticipated risks and harms that we have not considered. And, of course, equity, being a UN-led panel, ensuring that equity is at the heart of AI and it’s being done for public good.

Amandeep Singh Gill

Fantastic, thank you. That’s a great closing. Ladies and gentlemen, please join me in thanking our outstanding panel, and we are going to move straight to the closing. Over to you, Anil.

Anil Ananthaswamy

Thank you to the panel for a rich and forward-looking discussion. To close this session, it is my honor to invite Josephine Teo, Minister for Digital Development and Information of Singapore, to deliver the closing remarks. Minister Josephine Teo.

Josephine Teo

Good morning, everyone. First, allow me to thank the Secretary-General for his remarks, which serve as very useful guidance to all of us working on this important technology. For the closing this morning, I thought it would perhaps be useful to offer a perspective from a small state. Singapore has a population of just 6 million people, and more than 30 years ago at the UN we became the convener of the Forum of Small States, which still has about 108 members. I will make three points on how we look at developments on this front. The first point is that we believe in AI being used as a force for the public good, but to do so, it is important that we continue to invest in the science that underpins it and ground trust in evidence. This certainly requires sustained investment in research, and it is also the reason why we set aside a billion dollars in a national AI R&D plan, which will include foundational and applied research into responsible AI. We believe in it, and we have to put money behind this effort. There are, of course, other investments, such as in building up a digital trust centre.

It’s our designated AI safety institute, which has been participating in important conversations on this topic, as well as setting up a centre for advanced technologies in online safety. Those are just some of the efforts that we can dedicate resources to as a small state. The second point I want to make is that there is almost always going to be a tension between moving quickly, given the pace of AI development, and moving carefully, given the latest evidence on what we should be paying attention to. Both impulses are necessary, and we believe it is not impossible to balance them through the integration of science and policy. It is not easy, but it is not an effort that we should give up on.

I should just add that on this score, it will be much better if we can cooperate internationally to develop sound approaches that can also be interoperable across different jurisdictions. And this is one effort that we believe underpins the work that is being carried out by the UN. And this brings me to my third point. I want to highlight the important role that an organisation like the United Nations plays in facilitating global discourse to bridge science and policy. I cannot overemphasise the importance of this effort. We must recognise that the global AI governance landscape is becoming increasingly fragmented. There are multiple initiatives, frameworks and institutions. The UN’s unique value lies in your legitimacy and inclusiveness to encourage interoperability across efforts.

The Secretary-General talked about this too. We therefore welcome the establishment of the Independent International Scientific Panel on AI, building on the work of the UN High-Level Advisory Body on AI, which published its report on Governing AI for Humanity at the end of 2024. We note that the panel’s multidisciplinary approach, covering machine learning, applied AI, social science, and ethics, is necessary to address the complexity of AI governance challenges. Finally, I would like to acknowledge that we now have substantial convergence on high-level AI principles. Yoshua talked about this: transparency, accountability, fairness, safety. But the challenge is in operationalizing them. We need to find standardized evaluation methodologies that work across different regulatory contexts.

We need capacity building so that all countries can meaningfully engage with the technical challenges and work with the technical evidence, not just those with large AI research ecosystems. I would encourage all stakeholders to view scientific input not as a constraint on policy flexibility, but as a foundation for more durable, effective governance that can maintain public trust. We need to keep the conversation going, one where science informs governance and governance sharpens science. I would perhaps end by highlighting Singapore’s continued commitment to contribute to advancing these discussions. We were very fortunate to host the International Scientific Exchange on AI Safety and to bring about the Singapore Consensus on Global AI Safety Research Priorities.

Yoshua was in Singapore for this very momentous event. We will continue to participate in joint testing efforts of the International Network for Advanced AI Measurement, Evaluation and Science. We have organized two editions of the Singapore AI Safety Red Teaming Challenge, the first multicultural and multilingual AI safety red teaming exercise focused on the Asia-Pacific region. And as chair of the ASEAN Working Group on AI Governance, we have actively spearheaded efforts to foster a trusted environment in ASEAN by adapting global norms and best practices for ASEAN and by bringing about regional harmonization through the ASEAN Guide on AI Governance and Ethics, as well as expanding it to address the risks in generative AI. We are now working within ASEAN to explore practical tools for AI safety testing and aim to collectively develop a set of AI safety benchmarks that reflect our region’s concerns.

And finally, I’d like to welcome all colleagues to join us in Singapore for the second edition of the International Scientific Exchange, which we expect to take place on the 17th and 18th of May, and we look forward to furthering

Anil Ananthaswamy

Thank you very much once again. Thank you, Minister Teo, for your closing remarks. This session is now concluded. Thank you very much. Thank you.


António Guterres

Speech speed

110 words per minute

Speech length

653 words

Speech time

353 seconds

Science‑centered architecture for AI governance

Explanation

The Secretary‑General says the United Nations is constructing a practical framework that places scientific expertise at the core of global AI cooperation, turning AI from an uncertain risk into a reliable tool for the Sustainable Development Goals.


Evidence

“That is why the United Nations is building a practical architecture that puts science at the center of international cooperation on AI.” [2]. “Guided by science, we can transform AI from a source of uncertainty into a reliable engine for the sustainable development goals.” [4].


Major discussion point

Science as the foundation for AI governance


Topics

Artificial intelligence


Human control as a technical reality

Explanation

Guterres stresses that human oversight must be built into AI systems as a concrete technical capability, not merely a slogan, and that clear accountability is essential so responsibility cannot be delegated to algorithms.


Evidence

“Our goal is to make human control a technical reality, not a slogan.” [120]. “And it requires clear accountability so responsibility is never outsourced to an algorithm.” [121].


Major discussion point

Trust, accountability, and human oversight


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society



Yoshua Bengio

Speech speed

141 words per minute

Speech length

828 words

Speech time

351 seconds

Neutral, fact‑based synthesis for policy

Explanation

Bengio argues that policy makers need a neutral, fact‑based evaluation of AI developments that is accessible to all, helping to distinguish where scientific consensus exists from where uncertainty remains.


Evidence

“but that’s why it is so important to have as neutral and as fact‑based evaluation of what is going on available to everyone and in a language that is accessible to everyone and of course for policy makers which by the way is difficult for scientists to achieve they need help, they need iterations they need feedback from people who are used to the interface between science” [29]. “So something that’s a little bit subtle about this kind of exercise is that to be able to recognize the uncertainty and the divergences that exist, and where is it that scientists agree, where is it that the evidence is strong, where is it that we have clues that matter.” [39].


Major discussion point

Science as the foundation for AI governance


Topics

Artificial intelligence | Monitoring and measurement


Fast AI progress outpaces policy; need high‑level principles

Explanation

Bengio points out that AI capabilities are advancing faster than research publications and policy cycles, creating a lag that makes high‑level, technology‑agnostic principles essential for governing the technology.


Evidence

“Yes, yes The facts shown in the scientific benchmarks across labs, companies and academia on AI show very rapid growth of the capabilities of these systems… it’s very difficult to grasp what that means but because it’s moving so fast there’s always going to be a lag between even like the scientific papers take time to be written… there is going to be a lag and that’s a real problem because things could move” [45]. “My opinion on this is that we should be thinking about not just policy and the usual sense of coming up with principles, but we should try to strive for high‑level principles that can be applied without having to go into the details because the details are going to change.” [63].


Major discussion point

Rapid AI development creates policy lag and the need for high‑level principles


Topics

Artificial intelligence | Monitoring and measurement



Brad Smith

Speech speed

132 words per minute

Speech length

1339 words

Speech time

606 seconds

Common problem definition is missing

Explanation

Smith observes that disagreement over AI solutions stems from a lack of shared understanding of the problem itself, and calls for a science‑based common understanding to enable constructive policy discussions.


Evidence

“One of the things I’ve learned is the reason people so often disagree about the solution is they don’t have a common understanding of the problem.” [66]. “Because what we’re here to talk about today is all about creating a more common understanding together based on science of where artificial intelligence is going.” [71].


Major discussion point

Rapid AI development creates policy lag and the need for high‑level principles


Topics

Artificial intelligence | Capacity development



Amandeep Singh Gill

Speech speed

136 words per minute

Speech length

644 words

Speech time

283 seconds

Trusted scientific evidence for policymakers

Explanation

Gill asks what makes scientific evidence trustworthy and actionable for decision‑makers and highlights the feedback loop between science, evidence, and policy as essential for effective AI governance.


Evidence

“So in your view, what makes this evidence that comes from science trusted and actionable for policymakers?” [30]. “And my sense is that there is a loop here, that there is a loop between science and evidence, and evidence and… and policy.” [32].


Major discussion point

Science as the foundation for AI governance


Topics

Artificial intelligence | Monitoring and measurement



Soumya Swaminathan

Speech speed

180 words per minute

Speech length

451 words

Speech time

149 seconds

UN scientific panel akin to IPCC

Explanation

Swaminathan likens the new UN AI scientific panel to the IPCC, emphasizing its role in aggregating global expertise across health, education and agriculture, and stresses that equity must be central to its work.


Evidence

“I think we may be in a similar situation with AI, and it’s wonderful that the UN has now set up this body, which I see as something like the IPCC.” [41]. “And, of course, equity, being a UN‑led panel, ensuring that equity is at the heart of AI and it’s being done for public good.” [52].


Major discussion point

Science as the foundation for AI governance


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Policy must adapt continuously, like COVID‑19 response

Explanation

Drawing on the COVID‑19 experience, Swaminathan argues that AI policy should be updated daily based on the best available evidence, requiring mechanisms to request and incorporate new data swiftly.


Evidence

“In COVID, we had to review a couple of hundred publications every day to understand what was happening on different aspects, on the virus, on the immunology, on how vaccines were working and drugs, and we had to make recommendations based on the best available evidence that day.” [80]. “It must ask for the relevant evidence and be able to adapt when that is clear.” [79].


Major discussion point

Rapid AI development creates policy lag and the need for high‑level principles


Topics

Artificial intelligence | Monitoring and measurement



Balaraman Ravindran

Speech speed

169 words per minute

Speech length

483 words

Speech time

171 seconds

Need concrete evidence on AI’s social effects in the Global South

Explanation

Ravindran stresses the lack of data on how AI impacts youth, agriculture, education and urban‑rural dynamics in India and calls for rigorous evidence to guide policy decisions.


Evidence

“so thank you for that question so I mean AI right now especially in the global south so we don’t completely understand the implications of adopting AI and how is it going to affect the society… we don’t have enough evidence about how AI is even affecting the social fabric… what is the evidence do we have about what is happening in India…” [86]. “And I’m personally very concerned about how this will unfold for developing countries in the global south.” [90].


Major discussion point

Inclusive, equitable, and global collaboration


Topics

Artificial intelligence | Social and economic development


Establish benchmarks for AI impact in sectors

Explanation

He notes the buzz around AI in education and agriculture but points out that systematic benchmarks are still missing, underscoring the need for evidence‑based evaluation tools.


Evidence

“I am saying simple because there is a lot of positive buzz around using AI in education.” [89]. “But even there, we need a lot more evidence to come.” [83].


Major discussion point

Operationalizing AI principles, standards, and benchmarks


Topics

Artificial intelligence | Monitoring and measurement



Anne Bouverot

Speech speed

142 words per minute

Speech length

501 words

Speech time

211 seconds

Scientific panels reduce fear and inform policy

Explanation

Bouverot argues that having scientific panels helps societies understand AI better, thereby lowering anxiety and providing a solid basis for policy making.


Evidence

“So trying to understand things, having scientific panels is definitely the right thing to do.” [48].


Major discussion point

Science as the foundation for AI governance


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Job‑loss scenarios demand evidence‑based social policies

Explanation

She points out that predictions of massive job displacement require policies such as universal basic income, reskilling and lifelong learning, grounded in solid labour‑market evidence.


Evidence

“But if your potential or probable outcome is the end of jobs, then you need to think about universal basic income.” [113]. “If what economists are saying is that 80% of the jobs will be transformed, then the policy outcome is training, skilling, reskilling, and helping to educate people.” [114].


Major discussion point

Operationalizing AI principles, standards, and benchmarks


Topics

Artificial intelligence | The digital economy | Human rights and the ethical dimensions of the information society


Ajay Sood

Speech speed

138 words per minute

Speech length

304 words

Speech time

131 seconds

Public‑private partnership and techno‑legal design embed governance

Explanation

Sood describes a framework that combines public‑private collaboration with technical design choices that bake governance mechanisms directly into AI systems.


Evidence

“So we came out with some framework which we think with public-private partnership we could enable it.” [104]. “And that was done by embedding governance through technical design.” [109].


Major discussion point

Role of industry and techno‑legal frameworks


Topics

Artificial intelligence | Financial mechanisms | The enabling environment for digital development


Josephine Teo

Speech speed

140 words per minute

Speech length

901 words

Speech time

385 seconds

UN legitimacy and inclusiveness bridge fragmented AI governance

Explanation

Teo highlights the United Nations’ unique legitimacy and inclusive mandate as essential for fostering interoperability and coordination among diverse AI governance efforts.


Evidence

“The UN’s unique value lies in your legitimacy and inclusiveness to encourage interoperability across efforts.” [51]. “We note that the panel’s multidisciplinary approach, covering machine learning, applied AI, social science, ethics, all of these are necessary to address the complexity of AI governance challenges.” [9].


Major discussion point

Inclusive, equitable, and global collaboration


Topics

Artificial intelligence | Capacity development


Standardized evaluation methodologies and capacity building

Explanation

She calls for common evaluation methods that work across jurisdictions and for capacity‑building programmes so all countries can engage with technical AI challenges.


Evidence

“We need to find standardized evaluation methodologies that work across different regulatory contexts.” [101]. “We need capacity building so that all countries can meaningfully engage with the technical challenges.” [103].


Major discussion point

Operationalizing AI principles, standards, and benchmarks


Topics

Artificial intelligence | Monitoring and measurement | Capacity development


Anil Ananthaswamy

Speech speed

53 words per minute

Speech length

568 words

Speech time

640 seconds

Rapid AI development creates policy lag; need realistic benchmarks

Explanation

Ananthaswamy notes that AI governance is perceived as too slow and asks what scientific assessments could keep up with the fast‑moving technology, highlighting the inherent lag between research and policy.


Evidence

“We often hear that AI governance is moving too slowly and from your experience, what kinds of scientific assessments or benchmarks could realistically keep pace with this rapid change?” [8]. “…the facts shown in the scientific benchmarks across labs, companies and academia on AI show very rapid growth… there is going to be a lag and that’s a real problem because things could move” [45].


Major discussion point

Rapid AI development creates policy lag and the need for high‑level principles


Topics

Artificial intelligence | Monitoring and measurement


Agreements

Agreement points

Science must be central to AI governance and policy-making

Speakers

– António Guterres
– Yoshua Bengio
– Brad Smith
– Soumya Swaminathan
– Anne Bouverot
– Josephine Teo
– Anil Ananthaswamy

Arguments

Science must be at the center of international AI cooperation to close knowledge gaps and provide shared baseline analysis


Scientific panels should provide neutral, fact-based evaluation accessible to everyone, especially policymakers


People often disagree on solutions because they lack common understanding of problems; science helps create shared contextual understanding


Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response


Understanding AI through scientific evidence is essential before making policy decisions, as fear comes from lack of understanding


Scientific input should be viewed as a foundation for effective governance rather than a constraint on policy flexibility


We cannot govern what we do not understand – understanding technical details is crucial for effective AI governance


Summary

All speakers strongly agreed that scientific evidence and understanding must form the foundation of AI governance, moving away from hype, guesswork, and fear-based approaches toward fact-based policy-making


Topics

Artificial intelligence


The UN’s unique role and legitimacy in global AI governance

Speakers

– António Guterres
– Brad Smith
– Soumya Swaminathan
– Josephine Teo

Arguments

AI does not stop at borders and requires global cooperation; the UN provides unique legitimacy and inclusiveness for AI governance


The UN represents one of humanity’s greatest accomplishments and remains indispensable for global problem-solving


Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries


The UN’s role is crucial in facilitating global discourse and preventing fragmentation in AI governance landscape


Summary

Speakers unanimously supported the UN’s central role in AI governance, emphasizing its unique legitimacy, global reach, and ability to prevent fragmentation while ensuring inclusive participation


Topics

Artificial intelligence


The Independent International Scientific Panel on AI is a crucial development

Speakers

– António Guterres
– Yoshua Bengio
– Soumya Swaminathan
– Josephine Teo

Arguments

The panel will provide shared baseline analysis to help countries move from philosophical debates to technical coordination


The panel’s global and multidisciplinary aspect rooted in the UN is important for addressing power relationships and effects on developing countries


The panel should network scientists globally, set research priorities, and ensure equity is at the heart of AI development


The panel’s multidisciplinary approach covering machine learning, social science, and ethics is necessary for complex AI governance challenges


Summary

All speakers viewed the establishment of the Independent International Scientific Panel on AI as a significant and necessary step for global AI governance, emphasizing its multidisciplinary nature and global scope


Topics

Artificial intelligence


AI development pace creates governance challenges requiring rapid adaptation

Speakers

– António Guterres
– Yoshua Bengio
– Soumya Swaminathan
– Josephine Teo

Arguments

AI innovation moves at light speed, outpacing collective ability to understand and govern it


The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy


Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response


There’s tension between moving quickly given AI’s pace and moving carefully based on evidence, but both impulses are necessary


Summary

Speakers agreed that the unprecedented pace of AI development creates fundamental challenges for governance, requiring new approaches that can adapt quickly while maintaining evidence-based decision-making


Topics

Artificial intelligence


Need for inclusive global participation, especially from developing countries

Speakers

– Yoshua Bengio
– Soumya Swaminathan
– Anne Bouverot
– Balaraman Ravindran

Arguments

The panel’s global and multidisciplinary aspect rooted in the UN is important for addressing power relationships and effects on developing countries


Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries


Different predictions about job displacement lead to different policy outcomes – universal basic income versus training and reskilling


We lack sufficient evidence about AI’s effects on social fabric, children, and different cultural contexts in countries like India


Summary

Speakers emphasized the critical importance of ensuring that AI governance includes diverse global perspectives, particularly from developing countries and the Global South, recognizing that AI impacts vary significantly across different contexts


Topics

Artificial intelligence | Closing all digital divides


Similar viewpoints

Both emphasized the need to move beyond hype and speculation toward evidence-based understanding as the foundation for effective policy-making

Speakers

– António Guterres
– Brad Smith

Arguments

Policy cannot be built on guesswork or hype but needs facts that can be trusted and shared across countries


People often disagree on solutions because they lack common understanding of problems; science helps create shared contextual understanding


Topics

Artificial intelligence


Both drew parallels between AI governance challenges and pandemic response, emphasizing the need for rapid evidence processing and policy adaptation in fast-moving technological contexts

Speakers

– Yoshua Bengio
– Soumya Swaminathan

Arguments

The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy


Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response


Topics

Artificial intelligence


Both emphasized that AI should serve humanity and enhance human capabilities rather than simply pursuing technological advancement for its own sake

Speakers

– Brad Smith
– António Guterres

Arguments

The goal should be using machines to make people smarter rather than just building smarter machines


Science-led governance accelerates solutions and helps identify where AI can do the most good fastest


Topics

Artificial intelligence


Both highlighted the need for context-specific evidence and governance approaches that account for different cultural, economic, and social contexts, particularly in developing countries

Speakers

– Soumya Swaminathan
– Balaraman Ravindran

Arguments

Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries


We lack sufficient evidence about AI’s effects on social fabric, children, and different cultural contexts in countries like India


Topics

Artificial intelligence | Closing all digital divides


Unexpected consensus

Technology industry leader strongly supporting UN multilateralism

Speakers

– Brad Smith

Arguments

The UN represents one of humanity’s greatest accomplishments and remains indispensable for global problem-solving


Explanation

It was notable that a major technology industry executive provided such strong endorsement of the UN’s role and multilateral approaches, given that tech companies often prefer less regulated environments


Topics

Artificial intelligence


Agreement on the limitations of AI predictions and the need for humility

Speakers

– Brad Smith
– Anne Bouverot

Arguments

The goal should be using machines to make people smarter rather than just building smarter machines


Different predictions about job displacement lead to different policy outcomes – universal basic income versus training and reskilling


Explanation

Both speakers, despite coming from different sectors, showed remarkable agreement on the unreliability of AI predictions and the need for more measured, evidence-based approaches rather than grandiose claims


Topics

Artificial intelligence


Small state leadership in global AI governance

Speakers

– Josephine Teo

Arguments

Small states like Singapore can contribute through dedicated investments in AI research, safety institutes, and regional cooperation


Explanation

The comprehensive leadership role that Singapore, as a small state, is taking in global AI governance was unexpected, demonstrating that meaningful contributions don’t require being a major power


Topics

Artificial intelligence | Capacity development


Overall assessment

Summary

There was remarkably strong consensus among all speakers on the fundamental principles of AI governance: the centrality of science, the importance of evidence-based policy-making, the UN’s crucial role, the need for global cooperation, and the importance of inclusive participation. The main areas of agreement included the establishment of the Independent International Scientific Panel, the challenges posed by AI’s rapid development pace, and the need to ensure AI serves humanity rather than the reverse.


Consensus level

Very high level of consensus with no significant disagreements identified. This strong alignment suggests a mature understanding of AI governance challenges and broad support for the UN-led approach. The implications are positive for advancing global AI governance, as the lack of fundamental disagreements among diverse stakeholders (government officials, scientists, industry leaders, international organizations) indicates a solid foundation for collaborative action and policy development.


Differences

Different viewpoints

Speed vs. caution in AI governance implementation

Speakers

– Josephine Teo
– Yoshua Bengio

Arguments

There’s tension between moving quickly given AI’s pace and moving carefully based on evidence, but both impulses are necessary


The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy


Summary

Teo emphasizes balancing speed with evidence-based caution as achievable, while Bengio highlights the inherent lag problem between rapid AI development and policy response as a fundamental challenge


Topics

Artificial intelligence


Policy approach – detailed vs. high-level principles

Speakers

– Yoshua Bengio
– Anne Bouverot

Arguments

High-level principles should be applied without going into details since details change rapidly


Different predictions about job displacement lead to different policy outcomes – universal basic income versus training and reskilling


Summary

Bengio advocates for broad principles that avoid technical details due to rapid change, while Bouverot emphasizes the importance of specific evidence and predictions to determine concrete policy approaches


Topics

Artificial intelligence | The digital economy


Focus on machine intelligence vs. human enhancement

Speakers

– Brad Smith
– António Guterres

Arguments

The goal should be using machines to make people smarter rather than just building smarter machines


Science-led governance accelerates solutions and helps identify where AI can do the most good fastest


Summary

Smith focuses specifically on using AI to enhance human capabilities rather than just building smarter machines, while Guterres emphasizes using science to optimize AI deployment for maximum benefit without specifically prioritizing human enhancement


Topics

Artificial intelligence


Unexpected differences

Evidence requirements for policy action

Speakers

– Soumya Swaminathan
– Yoshua Bengio

Arguments

Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response


The rapid and uneven growth of AI capabilities creates difficulty in grasping implications, with significant lag between scientific evidence and policy


Explanation

Unexpectedly, both speakers acknowledge rapid change but reach different conclusions – Swaminathan suggests COVID-like rapid evidence processing can work for AI, while Bengio sees the lag as an inherent problem requiring different approaches


Topics

Artificial intelligence


Role of technical details in governance

Speakers

– Anil Ananthaswamy
– Yoshua Bengio

Arguments

We cannot govern what we do not understand – understanding technical details is crucial for effective AI governance


High-level principles should be applied without going into details since details change rapidly


Explanation

Surprising disagreement between the moderator and a key panelist on whether technical details are essential for governance or should be avoided due to rapid change


Topics

Artificial intelligence


Overall assessment

Summary

The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts arising around implementation approaches rather than core principles. Main areas of disagreement included the balance between speed and caution in governance, the level of technical detail needed in policy frameworks, and whether to focus on machine intelligence or human enhancement.


Disagreement level

Low to moderate disagreement level with high consensus on core principles but divergent views on implementation strategies. This suggests strong foundation for cooperation but potential challenges in developing specific governance mechanisms and timelines.


Partial agreements

Partial agreements

All agree on the fundamental need for science-based AI governance, but differ on implementation approaches – Guterres emphasizes institutional frameworks, Swaminathan focuses on rapid evidence processing similar to pandemic response, and Bouverot prioritizes understanding before action

Speakers

– António Guterres
– Soumya Swaminathan
– Anne Bouverot

Arguments

Science must be at the center of international AI cooperation to close knowledge gaps and provide shared baseline analysis


Evidence-based governance requires rapid review and adaptation as the field moves quickly, similar to COVID response


Understanding AI through scientific evidence is essential before making policy decisions, as fear comes from lack of understanding


Topics

Artificial intelligence


Both emphasize the need for inclusive AI governance that considers developing country perspectives, but Swaminathan focuses on institutional mechanisms for inclusion while Ravindran emphasizes the evidence gap about AI impacts in specific cultural contexts

Speakers

– Soumya Swaminathan
– Balaraman Ravindran

Arguments

Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from developing countries


We lack sufficient evidence about AI’s effects on social fabric, children, and different cultural contexts in countries like India


Topics

Artificial intelligence | Closing all digital divides


Both support the UN panel’s multidisciplinary approach, but Bengio emphasizes global power dynamics and developing country impacts while Teo focuses on technical complexity requiring diverse expertise

Speakers

– Yoshua Bengio
– Josephine Teo

Arguments

The panel’s global and multidisciplinary aspect rooted in the UN is important for addressing power relationships and effects on developing countries


The panel’s multidisciplinary approach covering machine learning, social science, and ethics is necessary for complex AI governance challenges


Topics

Artificial intelligence



Takeaways

Key takeaways

Science must be at the center of international AI cooperation to provide shared baseline analysis and close knowledge gaps between countries


The newly established Independent International Scientific Panel on AI will serve as a crucial bridge between scientific evidence and global AI policymaking


AI governance must balance the need to move quickly, given the pace of technology development, with the need to move carefully on the basis of emerging evidence


The UN’s unique legitimacy and inclusiveness makes it indispensable for preventing fragmentation in the global AI governance landscape


Evidence-based policy making is essential – policies cannot be built on hype or guesswork but must be grounded in facts that can be trusted and shared across countries


AI development should focus on using machines to make people smarter rather than just building smarter machines


Global AI governance must be inclusive, ensuring voices from developing countries and marginalized communities are heard in policy discussions


High-level AI principles (transparency, accountability, fairness, safety) have substantial convergence, but the challenge lies in operationalizing them across different contexts


Resolutions and action items

The UN General Assembly confirmed 40 experts for the Independent International Scientific Panel on AI, with work beginning immediately


The panel will deliver its first report ahead of the Global Summit and Global Dialogue on AI Governance in July


Singapore will host the second International Scientific Exchange on AI Safety on May 17-18


ASEAN will work collectively to develop AI safety benchmarks reflecting regional concerns


India will continue implementing its National AI Governance Framework with public-private partnerships for compute resources


Microsoft committed to putting full energy and resources to support UN AI governance efforts


Countries should invest in standardized evaluation methodologies that work across different regulatory contexts


Unresolved issues

Lack of sufficient evidence about AI’s effects on social fabric, children, and different cultural contexts, particularly in developing countries


Uncertainty about causal relationships in AI effectiveness (e.g., whether students use AI more because it’s effective or become more effective because they use it more)


Insufficient benchmarking systems to evaluate AI effectiveness in specific sectors like agriculture and education in different national contexts


Potential unanticipated risks and harms from AI that have not yet been considered or studied


The challenge of keeping scientific assessments and policy responses in pace with rapid AI development


How to ensure AI benefits are widely shared and don’t exacerbate global inequalities


The need for capacity building so all countries can meaningfully engage with technical AI challenges


Suggested compromises

Balancing the tension between moving quickly (given AI’s rapid pace) and moving carefully (based on emerging evidence) through integration of science and policy


Developing high-level principles that can be applied without going into technical details, since details change rapidly


Using technology-enabled guardrails for AI deployment rather than relying solely on traditional policy mechanisms


Adopting a ‘techno-legal’ approach that embeds governance through technical design, as demonstrated by India’s digital public infrastructure


Creating interoperable approaches across different jurisdictions while allowing for local adaptation


Viewing scientific input as a foundation for effective governance rather than a constraint on policy flexibility


Thought provoking comments

We cannot govern what we do not understand… AI does not stop at borders, and no nation can fully grasp its implications on its own. If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype or disinformation. We need facts we can trust and share across countries and across sectors. Less noise, more knowledge.

Speaker

António Guterres


Reason

This comment established the fundamental premise that effective governance requires deep understanding, not speculation. It reframed AI governance from a reactive, fear-based approach to a proactive, evidence-based one. The phrase ‘less noise, more knowledge’ became a recurring theme throughout the discussion.


Impact

This opening statement set the entire tone for the session, establishing science-based evidence as the cornerstone of all subsequent discussions. Every speaker referenced back to this need for factual understanding over speculation, making it the foundational framework for the conversation.


Maybe unlike in the case of climate, the scientists themselves don’t always agree on what to expect for the future or even how to interpret the science that exists… Even if we’re not certain about a particular risk, we might have clues about it. But if the risk has huge severity (in other words, if it does unfold it could be catastrophic), then policymakers need to pay attention.

Speaker

Yoshua Bengio


Reason

This comment introduced crucial nuance by acknowledging that AI governance faces unique challenges compared to other scientific policy areas like climate change. It highlighted the paradox of needing to act on uncertain but potentially catastrophic risks, introducing the concept of precautionary governance based on severity rather than certainty.


Impact

This shifted the discussion from seeking definitive answers to managing uncertainty intelligently. It influenced subsequent speakers to focus on adaptive governance frameworks and the importance of continuous monitoring rather than waiting for complete scientific consensus.


There is a well-known economic theory that says that humanity is, in many ways, almost destined to repeat its great economic mistakes every 80 years… Just as there is a risk that humanity forgets the mistakes it made 80 years ago, humanity runs the risk of forgetting the great successes it created 80 years ago. It was just over 80 years ago that the world came together to create the United Nations.

Speaker

Brad Smith


Reason

This historical perspective was profound because it recontextualized current AI governance challenges within the broader arc of human institutional memory and cooperation. It elevated the discussion from technical policy details to fundamental questions about how humanity learns from history and builds lasting institutions.


Impact

This comment fundamentally shifted the conversation’s scope, moving it from immediate technical concerns to long-term institutional thinking. It provided historical legitimacy for multilateral approaches and influenced subsequent speakers to emphasize the UN’s unique role in global coordination.


One of the reasons people so often disagree about the solution is they don’t have a common understanding of the problem. They don’t spend enough time talking about the problem… Why does that matter today? Because what we’re here to talk about today is all about creating a more common understanding together based on science of where artificial intelligence is going.

Speaker

Brad Smith


Reason

This insight identified a fundamental flaw in policy discussions – the tendency to jump to solutions without establishing shared problem definitions. It provided a meta-framework for understanding why AI governance is so contentious and positioned scientific assessment as the solution to this foundational issue.


Impact

This comment reframed the entire purpose of the scientific panel from providing answers to creating shared understanding of questions. It influenced the panel discussion to focus more on evidence-gathering methodologies and less on prescriptive solutions.


In COVID, we had to review a couple of hundred publications every day to understand what was happening… I think we may be in a similar situation with AI… Some of our recommendations were relevant in high-income countries but not in low-income countries because the context is very different… If AI has to work for everyone, then we need to make sure that those voices are heard.

Speaker

Soumya Swaminathan


Reason

This comment drew powerful parallels between pandemic response and AI governance while highlighting the critical issue of contextual relevance. It introduced the concept that universal technologies require locally-informed governance, challenging one-size-fits-all approaches.


Impact

This shifted the discussion toward inclusive governance models and highlighted the importance of diverse perspectives in scientific assessment. It influenced subsequent speakers to emphasize regional differences and the need for culturally-sensitive approaches to AI governance.


We don’t have enough evidence about how AI is even affecting the social fabric, how are children getting increasingly isolated with the adoption of AI… All of these stories are coming to us from the west, so what is it that’s happening in India?

Speaker

Balaraman Ravindran


Reason

This comment exposed a critical gap in the global AI discourse – the dominance of Western research and perspectives in understanding AI’s social impacts. It challenged the assumption that AI effects are universal and highlighted the need for region-specific research.


Impact

This comment brought urgent attention to research gaps and geographic bias in AI studies. It influenced the discussion to focus more on the need for diverse, locally-relevant research and evidence-gathering that reflects different cultural and economic contexts.


If your potential or probable outcome is the end of jobs, then you need to think about universal basic income… If what economists are saying is that 80% of the jobs will be transformed, then the policy outcome is training, skilling, reskilling… That’s why listening to economists and having the International Labor Organization and other institutions really follow closely what is happening is super important.

Speaker

Anne Bouverot


Reason

This comment demonstrated how different scientific assessments lead to fundamentally different policy responses. It showed the concrete policy implications of getting the evidence right, using employment impacts as a clear example of how scientific uncertainty translates to policy uncertainty.


Impact

This practical example crystallized the abstract discussion about evidence-based policy into concrete terms that all participants could understand. It reinforced the importance of accurate scientific assessment and influenced the closing discussions about operationalizing scientific insights.


Overall assessment

These key comments collectively transformed what could have been a technical discussion about AI governance into a profound examination of how humanity makes collective decisions about transformative technologies. The discussion evolved through several phases: Guterres established the foundational principle that governance requires understanding; Bengio introduced the complexity of governing under uncertainty; Smith provided historical context and identified the problem-definition challenge; Swaminathan brought practical experience from pandemic response and highlighted equity concerns; Ravindran exposed research gaps and geographic bias; and Bouverot demonstrated concrete policy implications. Together, these insights created a rich, multi-layered conversation that moved beyond simple calls for regulation to explore the deeper challenges of building legitimate, effective, and inclusive global governance for emerging technologies. The discussion successfully bridged technical expertise, policy experience, and institutional wisdom to create a framework for thinking about AI governance that is both scientifically grounded and politically realistic.


Follow-up questions

How can we develop standardized evaluation methodologies that work across different regulatory contexts for AI governance?

Speaker

Josephine Teo


Explanation

This is crucial for operationalizing high-level AI principles like transparency, accountability, fairness, and safety across different jurisdictions and ensuring interoperability.


What are the psychological effects of chatbots on people, particularly in different cultural contexts?

Speaker

Yoshua Bengio


Explanation

Bengio noted that while there is anecdotal evidence, scientific studies are just beginning to emerge, and there’s a need to understand these effects across different regions and cultures.


How is AI affecting children’s social isolation and mental health, particularly comparing urban vs rural contexts in India?

Speaker

Balaraman Ravindran


Explanation

There’s concern about increasing isolation among children as AI adoption grows, but the evidence comes mostly from Western studies rather than from Indian contexts, which have different cultural settings.


What benchmarks can evaluate the efficiency and effectiveness of AI models in agriculture in India?

Speaker

Balaraman Ravindran


Explanation

As governments push for AI adoption in agriculture, there’s a need for India-specific benchmarks to assess these interventions and understand potential flaws.


What is the causal relationship between AI usage frequency and learning effectiveness in education?

Speaker

Balaraman Ravindran


Explanation

Preliminary studies show effectiveness correlates with usage habits, but it’s unclear whether more usage leads to better effects or better effects lead to more usage.


How can we ensure AI governance frameworks account for diverse user contexts, such as low-income women farmers in remote areas versus large-scale farmers with advanced machinery?

Speaker

Soumya Swaminathan


Explanation

Different contexts require different approaches, and governance must ensure all voices are heard, particularly from underrepresented groups who may use technology differently.


What are the specific impacts of AI on youth in different cultural and economic contexts?

Speaker

Balaraman Ravindran


Explanation

While there are anecdotal reports from the West about AI dependence among children and vulnerable individuals, there is insufficient evidence about what is happening in India and other developing countries.


How can we develop technical frameworks for embedding governance through design (techno-legal approaches) for AI systems?

Speaker

Ajay Sood


Explanation

Building on India’s experience with digital public infrastructure, there’s a need to develop frameworks and technologies that embed governance directly into AI system design.


What unanticipated risks and harms from AI have not yet been considered or studied?

Speaker

Soumya Swaminathan


Explanation

As AI deployment accelerates, there may be unforeseen negative consequences that require proactive research and monitoring to identify and address.


How can we create and maintain global databases of scientific data that are accessible to researchers worldwide for AI-assisted scientific discovery?

Speaker

Anne Bouverot


Explanation

To realize AI’s potential for accelerating scientific discovery, there is a need for internationally funded and maintained scientific databases that are openly available to researchers worldwide.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.