How to make AI governance fit for purpose?
10 Jul 2025 16:00h - 16:45h
Session at a glance
Summary
This discussion at the AI for Good conference focused on how different countries approach AI governance, balancing innovation with regulation while maximizing benefits and minimizing risks. The panel featured representatives from the United States, France, China, and Singapore, each presenting their nation’s perspective on AI development and governance strategies.
Jennifer Bachus from the US emphasized the Trump administration’s deregulatory approach, arguing that excessive regulation could stifle AI innovation and harm America’s technological leadership. She stressed the importance of multi-stakeholder processes and warned against over-regulation that might “strangle” transformative AI technologies. Anne Bouverot from France highlighted the Paris AI Summit’s focus on practical actions rather than regulation, announcing significant European investments alongside initiatives such as a public-interest AI foundation and the sustainable AI coalition.
China’s Vice Minister Shan Zhongde discussed the country’s emphasis on open-source development, citing DeepSeek as an example of cost-effective AI innovation, and called for international cooperation and joint standards development. Singapore’s Chuen Hong Lew advocated for a “light touch” regulatory approach focused on building trust through evidence-based governance, emphasizing the need for practical guidelines and international collaboration.
When asked what keeps them awake at night regarding AI governance, panelists cited concerns about balancing development with safety, ensuring inclusive multi-stakeholder participation, preventing authoritarian misuse of AI, and managing the rapid pace of technological change. All speakers emphasized the importance of international cooperation, talent development, and ensuring AI serves human-centered purposes. The discussion concluded that effective AI governance requires adaptive institutions, strategic investments, and collaborative frameworks that can keep pace with rapidly evolving technology while serving the public good.
Keypoints
## Major Discussion Points:
– **Balancing Innovation and Regulation**: The central tension between fostering AI innovation versus implementing governance frameworks, with speakers emphasizing the need to avoid “over-regulation” that could stifle technological advancement while still ensuring responsible development.
– **International Cooperation and Multi-stakeholder Governance**: The necessity of collaborative approaches involving governments, private sector, researchers, and civil society across different countries with varying regulatory philosophies (US deregulatory approach, European regulatory framework, China’s state-led development, Singapore’s “light touch” model).
– **AI Safety and Security Concerns**: Addressing dual-use risks, including potential misuse by authoritarian regimes for surveillance and military purposes, data security issues, and the need for testing and benchmarking frameworks to ensure AI systems are trustworthy and safe.
– **Economic and Social Impact Management**: Concerns about job displacement, the need for reskilling and upskilling programs, addressing the digital divide, and ensuring AI benefits are distributed equitably across different countries and populations.
– **Rapid Technological Pace and Governance Adaptation**: The challenge of keeping governance structures and policies current with the extremely fast pace of AI development, requiring adaptive and evidence-based approaches rather than rigid regulatory frameworks.
## Overall Purpose:
The discussion aimed to explore how different countries and stakeholders can work together to maximize AI’s benefits while minimizing risks, focusing on practical governance approaches that enable innovation while ensuring responsible development and deployment of AI technologies globally.
## Overall Tone:
The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspectives and regulatory philosophies. Speakers emphasized partnership and shared goals rather than conflict, with all participants acknowledging both the tremendous opportunities and serious challenges posed by AI. The tone was professional and forward-looking, with speakers showing mutual respect for different approaches while advocating for their respective positions on the innovation-regulation spectrum.
Speakers
– **Gabriela Ramos**: Moderator of the panel discussion, mentioned as running for a position at UNESCO and having worked with UNESCO on the recommendation on the ethics of artificial intelligence
– **Jennifer Bachus**: Acting head of the Bureau of Cybersecurity and Digital Policy in the USA
– **Shan Zhongde**: Vice Minister of Industry and Information Technology of China
– **Chuen Hong Lew**: Chief Executive Officer of the Infocomm Media Development Authority of Singapore (note: transcript mentions “India” but context clearly indicates Singapore)
– **Anne Bouverot**: Tech envoy of France, mentioned as having managed the AI Action Summit
**Additional speakers:**
– No additional speakers were identified beyond those listed above.
Full session report
# AI Governance Panel Discussion at AI for Good Conference
## Executive Summary
This panel discussion at the AI for Good conference examined national approaches to AI governance, featuring senior government officials from the United States, France, China, and Singapore. Moderated by Gabriela Ramos, who is running for a position at UNESCO and worked on UNESCO’s Recommendation on the Ethics of Artificial Intelligence, the discussion focused on how different countries balance AI innovation with appropriate governance frameworks.
## Panel Participants
– **Jennifer Bachus** – Acting Head of the Bureau of Cybersecurity and Digital Policy, United States
– **Anne Bouverot** – France’s Tech Envoy, who managed the AI Action Summit
– **Vice Minister Shan Zhongde** – China’s Ministry of Industry and Information Technology
– **Chuen Hong Lew** – Chief Executive Officer, Singapore’s Infocomm Media Development Authority
## Key National Positions
### United States Perspective
Jennifer Bachus presented the Trump administration’s deregulatory approach, warning that “excessive regulation of the AI sector could kill a transformative industry just as it’s taking off” and that over-regulation would discourage the risk-taking necessary for innovation. She stressed America’s need to maintain technological leadership through regulatory regimes that foster rather than constrain AI development.
Bachus highlighted security concerns about authoritarian regimes stealing AI technology for “military, intelligence, surveillance, and propaganda purposes,” arguing this represents a significant national security threat requiring protective measures rather than restrictive domestic regulation.
### French Perspective
Anne Bouverot described Europe’s evolution from regulation-focused approaches toward innovation and practical outcomes. She pointed to the announcements made at the Paris AI Summit, including 200 billion euros of investment in European AI champions and more than 400 million euros of commitments, mostly from philanthropies, to a public-interest AI foundation, along with initiatives such as the sustainable AI coalition.
Bouverot emphasized that the Paris AI Summit focused on “actions and practical outcomes” rather than regulation, placing innovation ahead of regulatory constraints. She noted AI’s unique characteristic of having “research roots and very quick societal impacts,” necessitating involvement from researchers, engineers, companies, governments, and civil society.
### Chinese Perspective
Vice Minister Shan Zhongde presented China’s balanced approach emphasizing both development and safety. He discussed China’s focus on open-source development, citing DeepSeek as an example of cost-effective AI innovation, while advocating for “people-centred AI that is traceable, reliable and monitorable.”
Shan emphasized international collaboration through the ITU and global standards development, and expressed concern that an “intelligence divide” could widen development gaps between countries. He highlighted various risks including data leakage, model hallucinations, and social structural changes.
### Singapore Perspective
Chuen Hong Lew advocated for a “light touch” regulatory approach focused on building trust through evidence-based governance. He noted that “light touch actually requires extremely heavy lifting” because it involves “building an entire ecosystem” rather than simply creating laws.
Lew emphasized the challenge that “the rate of change outside is greater than the rate of change inside,” requiring deliberate action despite rapid technological progress. He referenced the principle of “festina lente” (make haste slowly) as a framework for balancing urgency with careful consideration.
## “What Keeps You Up at Night” Responses
When asked about their primary concerns, panelists provided revealing insights:
**Anne Bouverot** expressed concern about job disruption and the need for comprehensive training, skilling, and upskilling programs to help people adapt to AI-driven changes.
**Vice Minister Shan** worried about various technical and social risks, including ensuring AI systems remain reliable and controllable while managing their broader societal impacts.
**Jennifer Bachus** focused on national security threats, particularly the risk of AI technologies being misused by hostile actors for surveillance and propaganda purposes.
**Chuen Hong Lew** was concerned about the pace of change, specifically asking “How do you re-skill such that the rate of change of human potential is faster than the rate of change of the algorithm?”
## Key Initiatives and Announcements
Several concrete initiatives were highlighted during the discussion:
– **AI Verify Foundation** (rendered “AI Verified Open Source Foundation” in the transcript) – Singapore’s open-source foundation bringing together model deployers, app developers, and third-party testers
– **Project Moonshot** (rendered “Project Boonshot” in the transcript) – the foundation’s open-source testing toolkit, highlighted for its capacity-building value for smaller countries
– **Sustainable AI Coalition** – announced at the Paris AI Summit, a coalition of the willing with close to 200 members
– **European Investment Package** – 200 billion euros of investment in European AI champions, alongside more than 400 million euros of commitments to public-interest AI initiatives
– **DeepSeek** – cited by China as an example of cost-effective open-source AI development
## Common Themes
Despite different national approaches, several shared themes emerged:
– **Multi-stakeholder involvement** – All speakers acknowledged the need for collaboration between governments, private sector, researchers, and civil society
– **Innovation focus** – Each representative emphasized avoiding over-regulation that could stifle technological advancement
– **International cooperation** – All participants recognized AI’s global nature requires coordinated approaches
– **Human-centric development** – Speakers emphasized ensuring AI serves human welfare and addresses societal needs
## Conclusion
The discussion revealed both convergence and divergence in national AI governance approaches. While all participants agreed on the importance of fostering innovation and avoiding excessive regulation, they differed in their specific strategies, security concerns, and implementation mechanisms. The conversation highlighted the ongoing challenge of developing effective AI governance frameworks that can keep pace with rapid technological advancement while ensuring responsible development that serves the public good.
Moderator Gabriela Ramos concluded by noting the value of continuing such dialogues to advance international cooperation on AI governance, emphasizing the importance of maintaining open channels for discussion among diverse stakeholders.
Session transcript
Gabriela Ramos: Hello, everybody, and it’s great to be here, Doreen, Thomas, great to be in A.I. for good And to discuss what is in the mind of everybody, which is to get the maximum benefit from these technologies and reduce the risks And here we are accompanied by a fantastic panel because you will be sharing with us very different perspectives on how to achieve this goal And I’m sure that by the end of this conversation we will be able to learn a little bit how the way you handle the things can deliver good outcomes Because I guess that’s what we are all aiming at And we have with us, as was mentioned, Jennifer Bachus, acting head of the Bureau of Cybersecurity and Digital Policy in the USA Thank you, Jennifer We have Shan Zhongde, Mr. Vice Minister of Industry and Information Technology and Policy of China We were in the launch, welcome And we also have Chuen Hong Lew, Chief Executive Officer of Infocom Media Development Authority of India So it’s great to have you all And great to have you, Anne Bouverot, you know her, she’s the tech envoy of France And maybe I will give you a little bit of time, Anne, to get your breath and we will start with the US So, Jennifer, we know that there has been a change in the scope of how the current administration wants to deal with AI We know that now priority has been given to continue with the innovation ladder and try to minimize the question of regulations So, given that these technologies are global, I want to know how are you going to handle the fact that other areas like Europe or Japan, China Might have different approaches and you will need to engage with them So how would you do that?
Jennifer Bachus: Thank you and really what an honor to be here today and to be at this event Which actually I heard a lot about last year, all the excitement over the robots, the sort of fun and exciting And to be at really an event where you get to see the forefront of how multi-stakeholderism can play into AI governance And I think ultimately that is the point of view of the US government is that this needs to be a multi-stakeholder process And the United States has always been at the forefront of AI innovation driven by the strength of our free markets Our world-class research institutions and entrepreneurial spirit ultimately driven by the multi-stakeholders So it’s the policy of the United States to protect the US private sector innovation and technological leadership And to sustain and enhance America’s global AI leadership in order to promote human flourishing, economic competitiveness and national security And that’s why President Trump took very strong executive action to sign executive orders to roll back previous regulations on the AI industry The Trump administration believes that AI will have countless revolutionary applications in economic innovation, job creation, national security, healthcare, free expression and beyond All of which actually we keep hearing about every day at this conference And we think that restricting its development now would mean essentially paralyzing one of the most promising technologies we have seen in generations America wants partners. That is my message here today We want to embark on the AI revolution before us with the kind of spirit and openness and collaboration But to create that kind of trust, we need regulatory regimes around the world that foster the creation of AI technology rather than strangle it We need all of our friends, including here in Europe, to look to this new frontier with optimism Because that is why we are here today. AI for good The administration is troubled by reports that some governments are looking at tightening the screws on US tech companies with international footprints We won’t accept that and we think it’s a terrible mistake We need to focus now on the opportunity to unleash our most brilliant innovators and use AI to improve the well-being of our nations and that of our peoples The administration will assure that American AI technology continues to be the gold standard worldwide And we are the partner of choice for other foreign countries and certainly businesses as they expand their use of AI Excessive regulation of the AI sector could kill a transformative industry just as it’s taking off And we’ll make every effort to encourage pro-growth, deregulatory AI policies worldwide We really face an amazing, extraordinary prospect of a new industrial revolution But it won’t come to pass if over-regulation discourages innovation from taking the risk necessary for achievement Nor will it occur if we allow AI to become dominated by massive players looking to use the tech to censor or control users’ thoughts Thank you
Gabriela Ramos: Thank you, thank you very much, Jennifer I’m very pleased that you used the word over-regulation because that qualifies the whole debate And of course it’s the societal choices of where we want to put the limits But I think that this is where we want to put the current debate And now over to you, my dear Anne You were exceptional in the managing of the AI Action Summit We all were there I think that the way you framed it, President Macron framed it, was exactly to see how much we can join forces to get AI for good And to deliver good outcomes So I would like to bring this French perspective into the broader European perspective And how would you connect with what is happening in Europe and what is the French position between this regulation, innovation Which for me is a false dilemma, but over to you
Anne Bouverot: Thank you, thank you so much Can you hear me? Can you hear me? Yes, this is better Thank you so much for having me I’m sorry I was a little late to get on the stage It’s such a pleasure to be here and to be part of this panel and of this day And I really want to thank ITU and Doreen for this invitation And I’m delighted to be on this panel with you all In terms of the Paris AI Summit that we held in February After the previous summits that were held in Bletchley Park and before the next one in India We didn’t really want to focus on regulation We really wanted to focus on actions and practical outcomes And bringing people together on discussions and announcements On what we can do to make AI something that benefits people and societies So we made a few announcements, some at European level, some with different parameters There was a Europe announcement which was 200 billion euros of investment in Europe By companies and by the European Commission on European AI champions To promote the development and the deployment of AI in Europe But we also launched initiatives such as the current AI foundation Or initiatives to help develop public interest AI And in particular data sets or tool sets that will help make AI something more concretely There for projects that will benefit us all We had more than 400 million euros of commitments Mostly from philanthropies but also from countries and companies to this initiative And we also announced the sustainable AI coalition We now have close to 200 members, people who have joined And this is a coalition of the willing Again, none of this is regulation The focus was really on what we can do to promote innovation and action And in terms of Europe, you’re right, the word that often comes to mind in terms of Europe Is maybe more regulation than innovation I think things are changing and I very much welcome this change This is not about dismantling all the regulation But this is about really putting a much stronger focus on innovation This is what Europe has been doing with the AI continent plan With investments in research, investments in AI factories and AI gigafactories And I’m very pleased to see this new focus in Europe Great
Gabriela Ramos: Thank you so much. Let me then move to China and I feel that this is a very important representation of the leading countries that are advancing AI. We all were witness of DeepSeek, this major foundational model that was a fraction of the cost and also of the environmental impact and we know that this is being used for many applications, Deputy Minister. So I want to please ask you to share with our audience what are the latest developments in this area in China and how would you picture open source as an element to advance this innovation with inclusion and what are the perspectives there, please?
Shan Zhongde: Our chairman, President Xi, has paid great importance to the development of AI. I think this is the foundation of the revolution of industrialization in China. We are focusing for the green sustainable development and intelligent industrialization to construct a secure and a safe ecosystem in China and this is the purpose to build the sound ecosystem to promote the innovative and also to break through the foundation technology and the construct and also to come up with a large model representative such as a DeepSeek to contribute our power to the global development. Secondly is to empower the application among the industries such as to promote the manufacturing to green and sustainable development in such a case, especially for the petrochemical, steel, healthcare for those domains. We are trying to promote productivity, the quality of the product and a green sustainable development level. Thirdly, that is to reinforce the international collaboration based on the ITU and also to proactively share some of the strategies of China’s AI development. We also want to work together to promote the international standards of the AI. Also, we are open-minded and inclusive to construct the international standards framework for the open source. For open source, I think as a collaborative platform, it belongs to the global community. It’s a great power for the world and China is emphasizing on the open source governance and the development. First, it’s the industry need-driven and it’s for the academy and the research institutes and working together, we have a series of open sources systems through iterations coming out with new solutions. Second, working on the safety. We are setting up the guidelines and the standards and setting up for the risk prevention mechanisms and solidify the foundations. From China’s practices, a lot of businesses are already embracing AI, working with global business and to benefit from these advantages that we have this universal open economic and ecosystem and to hope to welcoming these challenges and opportunities. Thank you.
Gabriela Ramos: Thank you so much, Vice Minister, and yes, welcome the international cooperation and that’s why we are here with the under ITU ceiling, so great to have you. Maybe let’s then go to Singapore. Also, we have heard many speakers from your country sharing the very concrete agenda that you have on developing AI, looking at avoiding over-regulation and having a light touch kind of approach, but how would you advance this strategy? How would you share the main elements and what are the advantages and how would you grapple with the very fast pace in which these technologies are moving?
Chuen Hong Lew: Well, Gabriela, thank you so much. Really nice to see you again and likewise to all my counterparts here. We’ve had the pleasure to work together and of course it’s a real pleasure to be here. I’ve got to thank Doreen and ITU. That is a very interesting question and it’s not a light question at all, but maybe if I were to take a step back, as policy makers, all of us sitting here, what is it ultimately that we’re trying to aim for and achieve? For us, perhaps it’s not regulation per se, but if I may posit, it’s actually trust, because at the core of it, while the two do overlap, they’re not quite identical and for us, building a trusted ecosystem is at the core, because once you put just enough guardrails in place to ensure that a frontier technology like AI is used responsibly, it actually gives maximum space as far as utilisation and adoption is concerned. So for us, the difference here is that it is a virtuous cycle whereby the guardrails then remove derailers and give that maximum opportunity for innovation. So for us as a small country, that is that North Star that drives how we think about regulation, governance and so on. You talked about light touch. I was going to be slightly cheeky to say light touch actually requires extremely heavy lifting and the heavy lifting here is because it has to be viewed as building an entire ecosystem and not just about the laws and maybe if I elaborate on maybe three perspectives that might be useful. The first is to take a very science-based and evidence-based approach. You talked about how this space is moving very, very, very quickly. What you know today actually is very different from what was happening six months ago and I guarantee you six months from now, where the frontiers will be, will be very different. So investing in our digital trust centre, which is our modest effort at advancing the sciences as far as safety and governance is concerned, being part of the founding AI Safety Institute network, corralling the best minds, Turing Award winners, 100 over the best scientists from both east and west to come up with what we call the Singapore Consensus and R&D priorities. I think all of these are extremely critical to make sure that we stay at the forefront of where that testing and safety would be. But at the same time, we also want to get our hands dirty. Those of you who build AI models and we do that, realise that it is not so simple when it comes to testing and benchmarking and because we get our hands dirty, we kind of help bridge what the science can do, what the evidence can do and what real deployers out there in the industry actually face and come up with a practical approach. So that is our first perhaps pillar. The second pillar, working very closely with industry. I don’t think that this can be done working alone. We have an AI Verified Open Source Foundation. It gathers everything from model deployers to app developers to third-party testers. It has an open source toolkit. We think an open source toolkit called Project Boonshot is extremely critical, especially from a capacity building perspective as there are many other very small countries like Singapore that may not have the full wherewithal. Through that, because of that sandbox, because of that continuous iteration, we have distilled what we think are practical first steps. We don’t want to let good or rather we don’t want to let perfect get in the way of good and to issue practical guidelines associated with that. 
Both of these are very critical because it keeps us humble and honest that these things are moving very, very quickly. Perhaps the last piece is really I don’t think any of us sitting here would like to see the ecosystem fragment around the world. So a lot of efforts as far as standards, standards building. SC42, as some of you are familiar, we are leading a work effort to try and corral some of these as practical standards that industry can use. And of course, working with like-minded partners. This can be big tech. These are other countries. Through ASEAN, Singapore leads the ASEAN work group on AI, as well as the Digital Forum of Small States. FOSS is a big part of the UN ecosystem, 108 small countries and through hopefully that capacity building, we can bring everyone along. So if I kind of wrap up, ultimately we’re driven by the north star of that AI for good and if we can bring an overall balance. and the Zeitgeist is one of optimism. And we inoculate the population more broadly. When we say inoculate, means that get them to understand both the opportunities, but also be very clear about the risks. Then I think the overall benefits will outweigh some of the things that we’re worried about. And there is an optimism and opportunity that unfolds as a result of AI. Thank you.
Gabriela Ramos: Thank you so much, Chuen Hong. And as you have heard, we started with a very narrow question in terms of regulation, innovation, and all of the speakers have broadened this discussion to discuss about the ecosystem, to discuss about all of the elements that we bring together, including international cooperation, including working in a multi-stakeholder, including the industry, but also including the policies. And I come from that perspective because I have this weakness of being a policy wonk. And at the end is not only about understanding the technologies, but understanding the policy to outcomes and to see how do we incentivize these issues. And I’ve been working with UNESCO on the recommendation on the ethics of artificial intelligence. Now I’m running for the job in UNESCO. But at the end, I feel it’s very important that we dissect this discussion to really see all of the elements that bring us together. Because each one of your statements bring us closer together in how do we govern these technologies. Let me then get to the second question for the panel. And I think there is going to be, they call it, how they call it, my dear Doreen, mentee, mentee? There’s gonna be a mentee that you need to answer to while I also ask it to the speakers because I love this question that I’m gonna pose to you. And we’ll go from Anne, Vice Minister, Jennifer and Shan. We will go in this order. And then you will answer also in the public. What aspect of AI governance keeps you up at night? Anne.
Anne Bouverot: Thank you so much, Gabriela. Thank you for this. I’m lucky to go first because by the time everyone has spoken, we will have said everything. But let me say maybe two things so that I leave some things for everyone. The first one, we’ve all mentioned it. You exactly said that. This is inclusion and multi-stakeholder. But I’d like to speak about it from a perspective that is rooted in what AI is. What is AI? AI is really something, of course it’s a technology that the name dates back to the 1950s. But the more recent wave of AI is a development that stems from science and researchers. And that goes super quickly from researchers to engineers, to startups, to companies and becomes part of everyday life. So we’re faced with a technology that has research roots and very quick societal impacts. And I think if only for this reason, we need to involve in the global governance discussions researchers and we have Nobel Prizes and Turing Prizes and all sorts of researchers, economists as well and all the researchers. We need to involve engineers. We need to involve companies. That goes from startups to large companies, from little tech to big tech to middle tech. We absolutely need to have the private sector. Of course, we’re here, we need the governments and we need the governments from small, medium and large countries. I take this opportunity to commend Singapore on being such an AI-ready country. You’ve done some fantastic efforts for that. And also we need civil society. We need the people who express some fears, look at the risks and want to develop AI for the common good. So I guess I’m trying to explain why and give a few concrete ways in which I think this needs to be an inclusive multi-stakeholder governance. My second point is this is called how to make AI governance fit for purpose. What is the purpose? Sometimes we can have a tendency to say we need to make governance the best possible governance, but the main question is what for? What are we trying to do? We’re trying to make sure that this revolution that AI brings is something that can benefit to everyone and we want to minimize the risks. So we want to address the fact that there’s a very strong potential for disruption on jobs and we want to make sure we focus on the best actions, which for now seem to be training, skilling, upskilling people. Maybe there are others, but I think there’s a very important role of what should the governments be for the impact on jobs? What can countries do individually and they’re doing it and what should we do together? There’s the area of security, trust, safety where maybe there are things or there are things that we’re doing together to make sure that there is some testing. Again, we shouldn’t overdo it, but we should make sure we focus on the risks that are concrete and that we can work on. And there’s a number of other specific areas where we should, you say it really with a light touch and I like light touch, but I like focus. I think we should really try to think about what we want to do and how we do it. And that’s the purpose of governance. So those were my two points. I’m sure we will have lots of very interesting comments from my co-panelists.
Gabriela Ramos: I’m sure you don’t sleep at night because it’s really how to get it right. What Anne has just put together is how to get the whole thing right to deliver for people. Vice minister.
Shan Zhongde: Thank you. And the LLM existence has largely increased the AI’s capability for handling complex jobs and the AI’s applications used in different sectors. And there are several aspects of the AI governance for the future. One is how to balance the development and the safety. AI has become, and this is an advanced technology and coordinated development. It’s very urgent and it’s also a long-term plan. And how do we better deal with the different kind of risk that brought by the AI. As an example, AI is bringing this data leakage and these model hallucinations risks and AI activities and the consequences and these problem with the relationships between people and also job impacts and the social structural changes. And the number three, how do we do better global governance? And AI is bringing this intelligence divide will increase the development level among countries. This AI governance needing us to join hands from with everyone, all states. And we China will be like a people centered AI for good and we can traceable and reliable and an AI that we can monitor. And we have this different classifications and under our control and the monitoring. One is to increase the innovation. Second is for the universal development and to increase its connection with the infrastructure, promote innovations and for promote applications and allow for the economic sustainable development. Second is to get the sectors consensus to increase of our intelligence, increase exchanges with all countries for AI for good and to come up to consensus and explore a equitable and inclusive governance models and the deepening of our international corporations going for AI for good and have a mindset for collaborations, open and sharing and the win-win situations and the practice and the work together and the share together and the build together. And we set up the international standards and the use these standards for applications and the promote AI for good and we hope. all the countries that we can work together on the international exchanges and the conversations and to make sure A.I. will be benefited for the human society. Thank you.
Gabriela Ramos: Thank you very much, Vice Minister. Jennifer, over to you.
Jennifer Bachus: So, in addition to my very strong concern that essentially A.I. governance is going to strangle A.I. in its bed, for lack of a better term, I think the United States is also concerned is that authoritarian regimes are stealing and using A.I. to strengthen militaries, intelligence and surveillance capabilities, capturing foreign data, creating propaganda to undermine other nations’ national security, and violating human rights, and that these discussions about A.I. governance are going to be utilized to essentially enhance these abilities and enhance this effort. The United States is going to block such efforts. We’re going to safeguard American A.I. and chip technologies from theft and misuse, and we’re going to work with our allies and partners to strengthen and extend these protections and close pathways to adversaries attaining A.I. capabilities that essentially threaten all of our peoples. I would be remiss if I didn’t say that essentially A.I. can be used for incredible good, but it can also be used for incredible harm, and we need to think about A.I. governance in that mindset. So, we are committed to making sure that our A.I. is the gold standard and that we’re the partners of choice, and that as we go forward in trying to create A.I. governance models, we continue to have a private sector, multi-stakeholder-led approach that’s bottom-up, that essentially creates the innovation that we all need to address the challenges that we’re facing today. Thank you. Thank you.
Gabriela Ramos: Thank you so much, Jennifer. Mr. Chuen Hong Lew. Please, please, please, please.
Chuen Hong Lew: I try to sleep well at night. I think that’s very important. So, if all of you are not getting your seven hours, please do. I think it’s medically proven, but jokes aside, perhaps two thoughts. You know, there’s a famous saying. It was by Jack Welch, and he says, when the rate of change outside is greater than the rate of change inside, the end is near. And I think when I express this, the idea is the sheer speed of which A.I. is progressing, and the sheer unknown, and that uncertainty, I think probably keeps not just myself, but I think everybody in the audience here awake. There is probably a visceral sense that this thing is happening much faster, that we have the ability to either cope or to address, and I think that’s something that is always at the top of my mind, and I think at the top of the mind of a very small country like Singapore. But here is where there is a bit of an oxymoron. I’m not a Latin expert, but I’ve always liked this Latin term, and this Latin term is called festinilente, and it basically means make haste slowly. So, it’s about holding that conflicting thought at the back of your mind, that even as the frontier technology moves very quickly, and we must run fast, we must run fast, because I think it behooves all of us here. I’ve seen frontier technology come and go, but I think this is one of those things that will fundamentally reshape what society is, and therefore the ability to run fast, to maximise the opportunities, as well as to mitigate the risks, but do so deliberately, not to knee-jerk, not to be impulsive. So, I think the first thing at the back of my mind is festinilente, make haste slowly. The second thought which is related to this is how do we make sure that AI is harnessed in a human-centric way for the public good? Because at the end of the day, after we talk about AI, ultimately does it lead to human public good? On one end of the dystopian sort of spectrum, you can say AI is going to replace all of us here today, and I’m very sure at the back of all your minds there’s that little niggling doubt, but if you go to the other end of the utopian spectrum, there is a potential embarrassment of riches, where the AI is able to, as an agent, do everything from booking your travel programme to making a research for you, and that allows us to at least sleep 10 hours every night. But I think the answer obviously is somewhere in between, and within here, maybe the second thought at the back of my mind is talent, and talent writ large with a capital T, because if you really want for all of us here, be it small countries, large countries, to maximum the benefits as far as jobs are concerned, there is a fear that AI is going to replace a lot of entry-level jobs, for example. How do you transition? How do you train? These are not trivial questions. They may sound mundane, but they are extremely critical. How do you re-skill? How do you re-skill such that the rate of change of human potential is faster than the rate of change of the algorithm? I think that is critical to make sure that we maximise the potential, so talent, and talent when written small also means that we all know big tech companies are hoovering up talent, tens of millions of dollars as far as signing bonuses. How do we make sure that some of this talent, and we can attract this talent, to the larger purpose of understanding its broader impact, contributing to public service? 
For us to be able to keep at the forefront, to look big tech eye to eye, I need to know, and we need to know how AI is actually deployed. So I think the idea of talent, and how we groom that talent, how do we attract that talent, how do we harness that talent for public good, I think is probably the second thing at the back of my mind. Hope that is useful.
Gabriela Ramos: That is very, very useful, and I want to thank you all, because at the end, with very different perspectives, what we are hearing in this conversation is that we need to better fit ourselves to deal with AI. It is not about AI, as you said, I feel that we have seen the pace in which this is developing. You all mentioned the impact in labor markets and skills. At the end, we were thinking that it was going to be the routine works that were going to be replaced, and now we know that with generative AI, a higher level of functions are being endangered. We know the question of those that are lagging behind. We still have one third of the world that doesn’t have access to a stable internet. We know that there is this dual use, the misuse, the manipulation, the abuse, the intentional using of AI for producing harm, which we should not neglect, but we also know that the AI revolution is challenging the structures that we have in the governance fields as we know it, and at the end, I feel that one of the real questions is how much we invest in those governance structures in the very same, in the sovereign way in which each country wants to determine their own purposes, but at the end is this adaptability that we need to get together. But we are very pleased to have this very impressive group of speakers, because each one of you can show where do you put the emphasis? Do you put it in the investments, innovation? Do you put it in the multi-stakeholder? Do you put it on the frameworks? Do you put it on making sure that everybody benefits? And I feel that this is the kind of reflections that we want to have. And just to conclude, I don’t know if we need to see the results of the question or not. Do we have it? I’m calling on the technicians. No, maybe we will be looking into that when we, how to make it, no. We don’t have them, but we will look into it. In any case, I feel that we have a lot of very interesting comments from our speakers. From my perspective, it’s about incentives, it’s about investments, it’s about institutions. And I think that each one of you have contributed to these debates. And I just want to finish by thanking ITU, Doreen, and all of you for joining us in this conversation that will continue. Because as you mentioned, the pace is very fast, and we need to learn and mention also the question of research, understanding better, and the collaboration that we all need to bring our countries together to work to get it right. So thank you so much. And let’s continue with the other panels. Thank you so much.
Jennifer Bachus
Speech speed
156 words per minute
Speech length
742 words
Speech time
284 seconds
Multi-stakeholder process is essential for AI governance, driven by free markets and entrepreneurial spirit
Explanation
The US government believes AI governance must be a multi-stakeholder process, leveraging America’s strengths in free markets, world-class research institutions, and entrepreneurial spirit. This approach protects private sector innovation while sustaining America’s global AI leadership to promote human flourishing, economic competitiveness, and national security.
Evidence
President Trump signed executive orders to roll back previous AI regulations; the US has always been at the forefront of AI innovation driven by free markets and research institutions
Major discussion point
AI Governance Approaches and Regulatory Philosophy
Topics
Legal and regulatory | Economic
Agreed with
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
Agreed on
Multi-stakeholder approach is essential for AI governance
Excessive regulation could kill transformative AI industry and discourage necessary risk-taking for innovation
Explanation
Over-regulation of the AI sector could destroy a transformative industry just as it’s beginning to flourish. The administration argues that restricting AI development now would paralyze one of the most promising technologies in generations and prevent the risk-taking necessary for breakthrough achievements.
Evidence
AI will have revolutionary applications in economic innovation, job creation, national security, healthcare, and free expression; reports of governments tightening restrictions on US tech companies
Major discussion point
Innovation vs Regulation Balance
Topics
Legal and regulatory | Economic
Agreed with
– Anne Bouverot
– Chuen Hong Lew
Agreed on
Innovation should be prioritized over excessive regulation
Disagreed with
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
Disagreed on
Regulatory approach to AI governance
America wants partners but needs regulatory regimes that foster rather than strangle AI technology creation
Explanation
The US seeks international collaboration on AI development but requires that global regulatory frameworks support rather than hinder AI innovation. The administration will work to encourage pro-growth, deregulatory AI policies worldwide while ensuring American AI technology remains the gold standard.
Evidence
Concerns about reports of governments tightening screws on US tech companies; emphasis on creating trust through regulatory regimes that foster AI creation
Major discussion point
International Cooperation and Standards
Topics
Legal and regulatory | Economic
Agreed with
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
– Gabriela Ramos
Agreed on
International cooperation and standards are crucial for AI governance
Disagreed with
– Shan Zhongde
– Chuen Hong Lew
Disagreed on
Role of international cooperation and standards
Authoritarian regimes stealing AI for military, intelligence, surveillance and propaganda purposes threatens national security
Explanation
The US is concerned that authoritarian governments are stealing and misusing AI to strengthen their military and intelligence capabilities, capture foreign data, create propaganda to undermine other nations, and violate human rights. The US will block such efforts and work with allies to protect AI technologies from theft and misuse.
Evidence
AI can be used for incredible good but also incredible harm; need to safeguard American AI and chip technologies from theft and close pathways to adversaries
Major discussion point
Security and Misuse Concerns
Topics
Cybersecurity | Human rights
Disagreed with
– Shan Zhongde
Disagreed on
Security concerns and threat assessment
AI will have revolutionary applications in economic innovation, job creation, healthcare, and beyond
Explanation
The Trump administration believes AI will bring countless revolutionary applications across multiple sectors including economic innovation, job creation, national security, healthcare, and free expression. These applications are being demonstrated daily and represent the transformative potential of the technology.
Evidence
Examples being heard every day at the conference; AI applications span economic innovation, job creation, national security, healthcare, free expression and beyond
Major discussion point
Economic and Social Impact
Topics
Economic | Development
Agreed with
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
– Gabriela Ramos
Agreed on
AI has significant economic and social impacts that require attention
Anne Bouverot
Speech speed
144 words per minute
Speech length
982 words
Speech time
408 seconds
Focus should be on actions and practical outcomes rather than regulation, with emphasis on innovation over regulatory constraints
Explanation
The Paris AI Summit focused on practical actions and outcomes rather than regulation, bringing people together for discussions and announcements on making AI benefit people and societies. The approach emphasizes what can be done concretely to promote innovation and action rather than creating regulatory barriers.
Evidence
Paris AI Summit made announcements including 200 billion euros of investment in European AI champions, 400 million euros of commitments to AI foundation initiatives, and launched the sustainable AI coalition with close to 200 members
Major discussion point
Innovation vs Regulation Balance
Topics
Legal and regulatory | Economic
Agreed with
– Jennifer Bachus
– Chuen Hong Lew
Agreed on
Innovation should be prioritized over excessive regulation
Disagreed with
– Jennifer Bachus
– Shan Zhongde
– Chuen Hong Lew
Disagreed on
Regulatory approach to AI governance
Europe is shifting focus from regulation to innovation with investments in AI champions and practical initiatives
Explanation
While Europe is often associated with regulation rather than innovation, there is a changing focus toward putting stronger emphasis on innovation. This includes investments in research, AI factories, and gigafactories through the AI continent plan, without dismantling all regulation but rebalancing priorities.
Evidence
200 billion euros of investment by companies and European Commission in European AI champions; AI continent plan with investments in research and AI factories
Major discussion point
Innovation vs Regulation Balance
Topics
Economic | Legal and regulatory
Multi-stakeholder governance must include researchers, engineers, companies, governments, and civil society
Explanation
AI governance requires inclusive multi-stakeholder participation because AI stems from research and quickly impacts society. This necessitates involving researchers (including Nobel and Turing Prize winners), engineers, companies of all sizes, governments from various countries, and civil society organizations that address risks and promote common good.
Evidence
AI has research roots with very quick societal impacts; need for Nobel Prizes, Turing Prizes, economists, startups to large companies, small to large countries, and civil society expressing fears and developing AI for common good
Major discussion point
International Cooperation and Standards
Topics
Legal and regulatory | Sociocultural
Agreed with
– Jennifer Bachus
– Shan Zhongde
– Chuen Hong Lew
– Gabriela Ramos
Agreed on
International cooperation and standards are crucial for AI governance
Focus needed on job disruption mitigation through training, skilling, and upskilling programs
Explanation
AI governance should address the strong potential for job disruption by focusing on the best actions, which currently appear to be training, skilling, and upskilling people. This requires both individual country efforts and collective international action to manage the impact on employment.
Evidence
Strong potential for disruption on jobs; best actions seem to be training, skilling, upskilling; important role for governments on job impact
Major discussion point
Economic and Social Impact
Topics
Economic | Development
Agreed with
– Jennifer Bachus
– Shan Zhongde
– Chuen Hong Lew
– Gabriela Ramos
Agreed on
AI has significant economic and social impacts that require attention
Shan Zhongde
Speech speed
114 words per minute
Speech length
689 words
Speech time
360 seconds
Balance between development and safety is crucial, requiring people-centered AI that is traceable, reliable and monitorable
Explanation
China emphasizes the need to balance AI development with safety considerations, promoting people-centered AI that is traceable, reliable, and monitorable. This approach involves different classifications and control mechanisms while promoting innovation and universal development to ensure AI benefits human society.
Evidence
President Xi’s emphasis on AI importance; focus on green sustainable development and intelligent industrialization; DeepSeek as example of large model development; setting up guidelines, standards, and risk prevention mechanisms
Major discussion point
AI Governance Approaches and Regulatory Philosophy
Topics
Legal and regulatory | Human rights
Disagreed with
– Jennifer Bachus
– Anne Bouverot
– Chuen Hong Lew
Disagreed on
Regulatory approach to AI governance
International collaboration through ITU and working together on global AI standards is essential
Explanation
China advocates for reinforcing international collaboration based on ITU frameworks and proactively sharing AI development strategies. The country supports working together to promote international AI standards and construct an international standards framework for open source, emphasizing open-minded and inclusive approaches.
Evidence
Emphasis on working with ITU; sharing China’s AI development strategies; promoting international AI standards; open-minded and inclusive approach to international standards framework for open source
Major discussion point
International Cooperation and Standards
Topics
Legal and regulatory | Infrastructure
Agreed with
– Jennifer Bachus
– Anne Bouverot
– Chuen Hong Lew
– Gabriela Ramos
Agreed on
International cooperation and standards are crucial for AI governance
Disagreed with
– Jennifer Bachus
– Chuen Hong Lew
Disagreed on
Role of international cooperation and standards
AI brings risks including data leakage, model hallucinations, and social structural changes that need monitoring
Explanation
AI governance must address various risks brought by AI technology, including data leakage, model hallucinations, the unintended consequences of AI-driven activity, changes in relationships between people, job impacts, and social structural changes. These risks require comprehensive monitoring and control mechanisms.
Evidence
Examples of specific risks: data leakage, model hallucinations, AI activities and consequences, job impacts, social structural changes
Major discussion point
Security and Misuse Concerns
Topics
Cybersecurity | Human rights
AI governance should promote economic sustainable development and universal benefits
Explanation
China’s approach focuses on promoting productivity, product quality, and green sustainable development across industries like petrochemical, steel, and healthcare. The goal is to ensure AI contributes to economic sustainable development while providing universal benefits and preventing an intelligence divide between countries.
Evidence
Applications in manufacturing, petrochemical, steel, healthcare sectors; focus on productivity and quality improvements; concern about intelligence divide increasing development gaps between countries
Major discussion point
Economic and Social Impact
Topics
Economic | Development
Agreed with
– Jennifer Bachus
– Anne Bouverot
– Chuen Hong Lew
– Gabriela Ramos
Agreed on
AI has significant economic and social impacts that require attention
Chuen Hong Lew
Speech speed
178 words per minute
Speech length
1578 words
Speech time
529 seconds
Trust-building through guardrails enables maximum innovation space, requiring science-based and evidence-based approaches
Explanation
Singapore focuses on building a trusted ecosystem rather than regulation per se, believing that appropriate guardrails remove barriers and create maximum space for AI utilization and adoption. This approach requires science-based and evidence-based methods, including investing in digital trust centers and participating in AI Safety Institute networks.
Evidence
Investment in digital trust center; participation in founding AI Safety Institute network; Singapore Consensus with 100+ scientists from east and west; hands-on experience building AI models
Major discussion point
AI Governance Approaches and Regulatory Philosophy
Topics
Legal and regulatory | Infrastructure
Disagreed with
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
Disagreed on
Regulatory approach to AI governance
Light-touch regulation requires heavy lifting to build entire ecosystems, not just laws
Explanation
Effective light-touch regulation demands extensive effort to build comprehensive ecosystems beyond just legal frameworks. This involves science-based approaches, close industry collaboration, and continuous iteration through sandboxes and practical guidelines that keep pace with rapidly evolving technology.
Evidence
AI Verified Open Source Foundation; Project Boonshot open source toolkit; practical guidelines from sandbox iterations; working with model deployers, app developers, third-party testers
Major discussion point
Innovation vs Regulation Balance
Topics
Legal and regulatory | Infrastructure
Agreed with
– Jennifer Bachus
– Anne Bouverot
Agreed on
Innovation should be prioritized over excessive regulation
Global ecosystem should not fragment, requiring standards building and working with like-minded partners
Explanation
Singapore advocates against fragmentation of the global AI ecosystem, emphasizing the need for standards building and collaboration with like-minded partners. This includes leading efforts in international standards organizations and working through regional and international partnerships to bring everyone along.
Evidence
Leading SC42 work effort; ASEAN work group on AI leadership; Digital Forum of Small States (FOSS) with 108 small countries; capacity building efforts
Major discussion point
International Cooperation and Standards
Topics
Legal and regulatory | Infrastructure
Agreed with
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
– Gabriela Ramos
Agreed on
International cooperation and standards are crucial for AI governance
Disagreed with
– Jennifer Bachus
– Shan Zhongde
Disagreed on
Role of international cooperation and standards
Rate of AI change exceeds ability to cope, requiring deliberate action despite rapid progress
Explanation
The speed of AI development creates uncertainty and challenges that exceed current coping abilities, particularly for small countries. This requires balancing the need to move fast with deliberate, non-impulsive action, following the principle of ‘make haste slowly’ to maximize opportunities while mitigating risks.
Evidence
Jack Welch quote about rate of change; visceral sense that AI is happening faster than ability to cope; concept of ‘festina lente’ (make haste slowly)
Major discussion point
Speed of AI Development Challenges
Topics
Legal and regulatory | Development
Talent development and retention is critical for maximizing AI benefits while ensuring human-centric public good
Explanation
Talent development is essential for ensuring AI serves human-centric public good, requiring focus on training, re-skilling, and attracting talent to public service. The challenge includes competing with big tech companies for talent while ensuring the rate of human potential development exceeds algorithmic advancement.
Evidence
Big tech companies offering tens of millions in signing bonuses; need to transition and retrain workers facing entry-level job displacement; importance of attracting talent to public service to understand AI deployment
Major discussion point
Economic and Social Impact
Topics
Economic | Development
Agreed with
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
– Gabriela Ramos
Agreed on
AI has significant economic and social impacts that require attention
Gabriela Ramos
Speech speed
144 words per minute
Speech length
1478 words
Speech time
615 seconds
AI governance requires understanding policy outcomes and incentivizing the right approaches through comprehensive frameworks
Explanation
Effective AI governance is not just about understanding technologies but about understanding how policies translate into outcomes and creating the right incentives. This involves working with frameworks such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence to dissect the discussion and see all the elements that bring stakeholders together.
Evidence
Work with UNESCO on the Recommendation on the Ethics of Artificial Intelligence; emphasis on understanding how policy translates into outcomes and on getting incentives right
Major discussion point
AI Governance Approaches and Regulatory Philosophy
Topics
Legal and regulatory | Human rights
The regulation vs innovation debate is a false dilemma that should focus on finding the right balance
Explanation
Rather than viewing regulation and innovation as opposing forces, the focus should be on finding appropriate limits based on societal choices. The key is avoiding ‘over-regulation’ while ensuring proper governance structures that enable innovation to flourish.
Evidence
Emphasis on the term ‘over-regulation’ as qualifying the whole debate; reference to societal choices about where to put limits
Major discussion point
Innovation vs Regulation Balance
Topics
Legal and regulatory | Economic
AI revolution challenges existing governance structures and requires investment in adaptability
Explanation
The AI revolution is fundamentally challenging governance structures as we know them, requiring significant investment in adapting those structures. Countries need to determine their own purposes while building the adaptability to work together effectively in this rapidly changing landscape.
Evidence
One-third of the world doesn’t have access to stable internet; higher-level functions being endangered by generative AI; dual-use and misuse concerns
Major discussion point
Speed of AI Development Challenges
Topics
Legal and regulatory | Development
Comprehensive approach needed addressing labor market impacts, digital divides, and misuse concerns
Explanation
AI governance must address multiple interconnected challenges including unexpected impacts on higher-level jobs from generative AI, the digital divide affecting one-third of the world without stable internet, and the dual-use nature of AI that can be used for harm. These issues require coordinated responses that go beyond traditional approaches.
Evidence
Generative AI affecting higher-level functions rather than just routine work; one-third of the world lacks stable internet access; concerns about manipulation, abuse, and intentional harm using AI
Major discussion point
Economic and Social Impact
Topics
Economic | Development | Human rights
Agreed with
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
Agreed on
AI has significant economic and social impacts that require attention
International cooperation essential for managing global nature of AI technologies
Explanation
Given that AI technologies are inherently global, effective governance requires international engagement and cooperation even when regions and countries such as Europe, Japan, and China take different approaches. This cooperation must happen within multilateral frameworks while respecting different national priorities.
Evidence
Recognition that technologies are global while different areas have different approaches; emphasis on working under ITU framework; need for countries to work together
Major discussion point
International Cooperation and Standards
Topics
Legal and regulatory | Infrastructure
Agreed with
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
Agreed on
International cooperation and standards are crucial for AI governance
Agreements
Agreement points
Multi-stakeholder approach is essential for AI governance
Speakers
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
Arguments
Multi-stakeholder process is essential for AI governance, driven by free markets and entrepreneurial spirit
Multi-stakeholder governance must include researchers, engineers, companies, governments, and civil society
International collaboration through ITU and working together on global AI standards is essential
Global ecosystem should not fragment, requiring standards building and working with like-minded partners
Summary
All speakers agree that AI governance requires inclusive participation from multiple stakeholders including governments, private sector, researchers, civil society, and international organizations. They emphasize the need for collaborative approaches rather than top-down regulatory frameworks.
Topics
Legal and regulatory | Infrastructure
Innovation should be prioritized over excessive regulation
Speakers
– Jennifer Bachus
– Anne Bouverot
– Chuen Hong Lew
Arguments
Excessive regulation could kill transformative AI industry and discourage necessary risk-taking for innovation
Focus should be on actions and practical outcomes rather than regulation, with emphasis on innovation over regulatory constraints
Light-touch regulation requires heavy lifting to build entire ecosystems, not just laws
Summary
These speakers share the view that over-regulation poses a significant threat to AI innovation and that governance approaches should prioritize enabling innovation while maintaining appropriate safeguards.
Topics
Legal and regulatory | Economic
International cooperation and standards are crucial for AI governance
Speakers
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
– Gabriela Ramos
Arguments
America wants partners but needs regulatory regimes that foster rather than strangle AI technology creation
Multi-stakeholder governance must include researchers, engineers, companies, governments, and civil society
International collaboration through ITU and working together on global AI standards is essential
Global ecosystem should not fragment, requiring standards building and working with like-minded partners
International cooperation essential for managing global nature of AI technologies
Summary
All speakers recognize that AI’s global nature requires international cooperation and coordination, though they may differ on specific approaches. They agree on the need for shared standards and collaborative frameworks.
Topics
Legal and regulatory | Infrastructure
AI has significant economic and social impacts that require attention
Speakers
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
– Gabriela Ramos
Arguments
AI will have revolutionary applications in economic innovation, job creation, healthcare, and beyond
Focus needed on job disruption mitigation through training, skilling, and upskilling programs
AI governance should promote economic sustainable development and universal benefits
Talent development and retention is critical for maximizing AI benefits while ensuring human-centric public good
Comprehensive approach needed addressing labor market impacts, digital divides, and misuse concerns
Summary
All speakers acknowledge that AI will have profound impacts on jobs, economy, and society, requiring proactive measures to manage transitions and ensure benefits are widely distributed.
Topics
Economic | Development
Similar viewpoints
Both speakers emphasize the need to move away from regulation-heavy approaches toward innovation-focused strategies, with Anne Bouverot noting that Europe’s shift in this direction aligns with the US position.
Speakers
– Jennifer Bachus
– Anne Bouverot
Arguments
Excessive regulation could kill transformative AI industry and discourage necessary risk-taking for innovation
Europe is shifting focus from regulation to innovation with investments in AI champions and practical initiatives
Topics
Legal and regulatory | Economic
Both speakers advocate for balanced approaches that enable innovation while maintaining safety and trust through appropriate monitoring and evidence-based frameworks.
Speakers
– Shan Zhongde
– Chuen Hong Lew
Arguments
Balance between development and safety is crucial, requiring people-centered AI that is traceable, reliable and monitorable
Trust-building through guardrails enables maximum innovation space, requiring science-based and evidence-based approaches
Topics
Legal and regulatory | Human rights
Both speakers prefer practical, action-oriented approaches over traditional regulatory frameworks, emphasizing the complexity of creating effective governance ecosystems.
Speakers
– Anne Bouverot
– Chuen Hong Lew
Arguments
Focus should be on actions and practical outcomes rather than regulation, with emphasis on innovation over regulatory constraints
Light-touch regulation requires heavy lifting to build entire ecosystems, not just laws
Topics
Legal and regulatory | Infrastructure
Unexpected consensus
Shared concern about AI development pace exceeding governance capabilities
Speakers
– Chuen Hong Lew
– Gabriela Ramos
Arguments
Rate of AI change exceeds ability to cope, requiring deliberate action despite rapid progress
AI revolution challenges existing governance structures and requires investment in adaptability
Explanation
Despite representing different perspectives (Singapore’s tech-forward approach vs. UNESCO’s ethics-focused framework), both speakers acknowledge the fundamental challenge that AI development is outpacing institutional capacity to govern it effectively.
Topics
Legal and regulatory | Development
Recognition of AI’s dual-use nature and security concerns
Speakers
– Jennifer Bachus
– Shan Zhongde
Arguments
Authoritarian regimes stealing AI for military, intelligence, surveillance and propaganda purposes threatens national security
AI brings risks including data leakage, model hallucinations, and social structural changes that need monitoring
Explanation
Despite representing potentially competing geopolitical interests (US and China), both speakers acknowledge legitimate security concerns around AI misuse, though they frame the threats differently.
Topics
Cybersecurity | Human rights
Emphasis on human-centric AI development
Speakers
– Shan Zhongde
– Chuen Hong Lew
Arguments
Balance between development and safety is crucial, requiring people-centered AI that is traceable, reliable and monitorable
Talent development and retention is critical for maximizing AI benefits while ensuring human-centric public good
Explanation
Both China and Singapore, despite different governance systems, emphasize human-centric approaches to AI development, suggesting convergence on fundamental values around AI serving human welfare.
Topics
Human rights | Development
Overall assessment
Summary
The speakers demonstrated remarkable consensus on key principles including the need for multi-stakeholder governance, international cooperation, innovation-focused approaches over heavy regulation, and attention to AI’s economic and social impacts. Despite representing different countries and governance philosophies, they converged on fundamental challenges and approaches.
Consensus level
High level of consensus on principles and challenges, with differences mainly in emphasis and implementation approaches rather than fundamental disagreements. This suggests potential for effective international cooperation on AI governance frameworks, though specific policy details may require further negotiation. The consensus indicates a maturing global understanding of AI governance needs that transcends traditional geopolitical divisions.
Differences
Different viewpoints
Regulatory approach to AI governance
Speakers
– Jennifer Bachus
– Anne Bouverot
– Shan Zhongde
– Chuen Hong Lew
Arguments
Excessive regulation could kill transformative AI industry and discourage necessary risk-taking for innovation
Focus should be on actions and practical outcomes rather than regulation, with emphasis on innovation over regulatory constraints
Balance between development and safety is crucial, requiring people-centered AI that is traceable, reliable and monitorable
Trust-building through guardrails enables maximum innovation space, requiring science-based and evidence-based approaches
Summary
The US strongly opposes regulation and advocates deregulation, while China emphasizes a balanced approach with monitoring and control mechanisms. France and Europe focus on practical outcomes over regulation while retaining an existing regulatory framework. Singapore advocates light-touch regulation built on trust-building guardrails.
Topics
Legal and regulatory | Economic
Security concerns and threat assessment
Speakers
– Jennifer Bachus
– Shan Zhongde
Arguments
Authoritarian regimes stealing AI for military, intelligence, surveillance and propaganda purposes threatens national security
International collaboration through ITU and working together on global AI standards is essential
Summary
The US explicitly identifies authoritarian regimes as threats that steal AI technology for harmful purposes, while China advocates for open international collaboration and standards-setting, representing fundamentally different security perspectives.
Topics
Cybersecurity | Legal and regulatory
Role of international cooperation and standards
Speakers
– Jennifer Bachus
– Shan Zhongde
– Chuen Hong Lew
Arguments
America wants partners but needs regulatory regimes that foster rather than strangle AI technology creation
International collaboration through ITU and working together on global AI standards is essential
Global ecosystem should not fragment, requiring standards building and working with like-minded partners
Summary
The US wants cooperation but on its terms with pro-growth, deregulatory policies, while China emphasizes inclusive global collaboration through ITU. Singapore focuses on preventing fragmentation through standards but works with ‘like-minded partners’, suggesting selective cooperation.
Topics
Legal and regulatory | Infrastructure
Unexpected differences
Open source AI development approach
Speakers
– Jennifer Bachus
– Shan Zhongde
– Chuen Hong Lew
Arguments
America wants partners but needs regulatory regimes that foster rather than strangle AI technology creation
International collaboration through ITU and working together on global AI standards is essential
Global ecosystem should not fragment, requiring standards building and working with like-minded partners
Explanation
While China explicitly promotes open source governance and development (mentioning DeepSeek as an example) and Singapore has an open source foundation, the US position focuses on protecting American AI technology from theft and misuse, creating an unexpected tension around openness versus protection in AI development.
Topics
Legal and regulatory | Infrastructure
Speed of AI development response
Speakers
– Jennifer Bachus
– Chuen Hong Lew
Arguments
Excessive regulation could kill transformative AI industry and discourage necessary risk-taking for innovation
Rate of AI change exceeds ability to cope, requiring deliberate action despite rapid progress
Explanation
Unexpectedly, while both acknowledge the rapid pace of AI development, they draw opposite conclusions. The US uses speed as an argument against regulation, to avoid stifling innovation, while Singapore uses the same concern about speed to argue for more deliberate, careful action following the ‘make haste slowly’ principle.
Topics
Legal and regulatory | Development
Overall assessment
Summary
The discussion reveals significant philosophical differences in AI governance approaches, with the US advocating for minimal regulation and market-driven solutions, China promoting balanced development with monitoring mechanisms, and smaller countries like Singapore seeking middle-ground approaches through trust-building and light-touch regulation.
Disagreement level
Moderate to high disagreement level with important implications for global AI governance. The fundamental tension between the US anti-regulatory stance and other countries’ more structured approaches, combined with security concerns and different views on international cooperation, suggests that achieving unified global AI governance will be challenging and may require compromise or parallel frameworks.
Takeaways
Key takeaways
AI governance requires a multi-stakeholder approach involving governments, private sector, researchers, civil society, and international organizations
There is broad consensus that excessive regulation could stifle AI innovation, with preference for light-touch, trust-building approaches over heavy regulatory frameworks
International cooperation and standardization are essential given AI’s global nature, despite different national approaches
The rapid pace of AI development creates challenges for governance structures, requiring adaptive and evidence-based policy approaches
Talent development, reskilling, and addressing job displacement are critical priorities for maximizing AI benefits while mitigating social disruption
Security concerns about AI misuse by authoritarian regimes for surveillance, military, and propaganda purposes need to be addressed
Focus should be on practical outcomes and actions rather than theoretical regulatory frameworks
Small and developing countries need capacity building and inclusive approaches to prevent an ‘intelligence divide’
Resolutions and action items
Continue international collaboration through ITU and other multilateral frameworks for AI standards development
Invest in talent development, training, and reskilling programs to address job displacement concerns
Develop open-source toolkits and capacity building initiatives for smaller countries
Maintain science-based and evidence-based approaches to AI governance that can adapt to rapid technological changes
Strengthen partnerships between like-minded countries while protecting AI technologies from misuse by adversaries
Unresolved issues
How to effectively balance innovation promotion with necessary safety guardrails across different national contexts
Specific mechanisms for preventing AI technology theft and misuse by authoritarian regimes
Concrete strategies for addressing the ‘intelligence divide’ between developed and developing nations
Detailed approaches for managing AI’s impact on labor markets and social structures
How to maintain multi-stakeholder governance while addressing legitimate national security concerns
Practical implementation of international AI standards given different regulatory philosophies
Suggested compromises
Focus on ‘light-touch’ regulation that builds trust through targeted guardrails rather than comprehensive restrictions
Emphasize practical actions and outcomes over theoretical regulatory frameworks
Pursue innovation-focused policies while maintaining necessary safety measures
Develop inclusive governance models that accommodate different national approaches while promoting international cooperation
Balance private sector leadership with appropriate government oversight and civil society input
Thought provoking comments
I’m very pleased that you used the word over-regulation because that qualifies the whole debate… this is where we want to put the current debate
Speaker
Gabriela Ramos
Reason
This comment reframes the entire discussion by distinguishing between ‘regulation’ and ‘over-regulation,’ suggesting that the debate isn’t about whether to regulate AI but about finding the right balance. It acknowledges that some regulation may be necessary while avoiding stifling innovation.
Impact
This comment shifted the conversation away from a binary regulation vs. innovation debate toward a more nuanced discussion about appropriate governance levels. It allowed subsequent speakers to move beyond defensive positions and engage with the complexity of balancing innovation with necessary safeguards.
For us, perhaps it’s not regulation per se, but if I may posit, it’s actually trust, because at the core of it… building a trusted ecosystem is at the core
Speaker
Chuen Hong Lew
Reason
This insight fundamentally reframes the governance challenge from a regulatory compliance issue to a trust-building exercise. It suggests that the goal isn’t regulation for its own sake, but creating conditions where AI can be deployed with confidence and social acceptance.
Impact
This comment elevated the discussion to a more philosophical level, moving beyond technical regulatory approaches to consider the underlying social contract needed for AI adoption. It influenced how other panelists framed their responses, with several subsequently emphasizing collaboration and multi-stakeholder approaches.
Light touch actually requires extremely heavy lifting and the heavy lifting here is because it has to be viewed as building an entire ecosystem
Speaker
Chuen Hong Lew
Reason
This paradoxical statement reveals the complexity behind seemingly simple regulatory approaches. It challenges the assumption that ‘light touch’ regulation is easier or less comprehensive, instead suggesting it requires more sophisticated, systemic thinking.
Impact
This comment deepened the conversation by revealing the hidden complexity in governance approaches. It helped explain why effective AI governance requires coordination across multiple domains (science, industry, standards, international cooperation) rather than simple rule-making.
What is AI? AI is really something… that has research roots and very quick societal impacts… we need to involve in the global governance discussions researchers… engineers… companies… governments… civil society
Speaker
Anne Bouverot
Reason
This comment provides a foundational analysis of why AI governance is uniquely challenging – because it rapidly transitions from academic research to societal impact. It logically derives the need for multi-stakeholder governance from the nature of AI itself.
Impact
This insight provided intellectual grounding for the multi-stakeholder approach that all panelists endorsed. It moved the discussion from advocating for inclusion to explaining why inclusion is structurally necessary for AI governance, making the argument more compelling.
When the rate of change outside is greater than the rate of change inside, the end is near… festina lente… make haste slowly
Speaker
Chuen Hong Lew
Reason
This comment captures the central tension in AI governance – the need to move quickly enough to keep pace with technological development while being deliberate enough to avoid mistakes. The Latin phrase ‘festina lente’ elegantly encapsulates this paradox.
Impact
This philosophical insight resonated throughout the discussion, providing a framework for understanding why AI governance feels so challenging. It validated the urgency felt by all participants while advocating for thoughtful approaches, helping to reconcile the apparent contradiction between speed and deliberation.
How do you re-skill such that the rate of change of human potential is faster than the rate of change of the algorithm?
Speaker
Chuen Hong Lew
Reason
This question reframes the jobs/displacement challenge in mathematical terms, suggesting that the solution isn’t to slow down AI development but to accelerate human adaptation. It’s a profound reconceptualization of the human-AI relationship.
Impact
This insight shifted the discussion from defensive concerns about job displacement to proactive thinking about human development. It influenced the moderator’s closing remarks about the need to ‘better fit ourselves to deal with AI’ rather than constraining AI to fit existing structures.
Overall assessment
These key comments fundamentally elevated the discussion from a typical regulatory debate to a sophisticated exploration of governance philosophy and systemic challenges. Rather than participants defending predetermined positions, the insightful comments created space for nuanced thinking about trust, ecosystem building, and the unique characteristics of AI that demand new governance approaches. The discussion evolved from binary thinking (regulation vs. innovation) to complex, multi-dimensional analysis of how societies can adapt to transformative technology. The most impactful comments were those that provided new conceptual frameworks – like trust over regulation, ecosystem thinking, and the paradox of making haste slowly – that allowed all participants to engage more thoughtfully with the challenges rather than simply advocating for their national positions.
Follow-up questions
How to balance AI development and safety in the face of rapid technological advancement
Speaker
Shan Zhongde
Explanation
The Vice Minister highlighted this as a critical governance challenge, noting the urgency of coordinating development with safety while managing risks including data leakage, model hallucinations, and social structural changes
How to address the AI intelligence divide that will increase development gaps between countries
Speaker
Shan Zhongde
Explanation
This was identified as a key concern requiring global cooperation to ensure equitable and inclusive AI governance models
How to ensure the rate of change of human potential is faster than the rate of change of algorithms
Speaker
Chuen Hong Lew
Explanation
This addresses the critical challenge of human adaptation to AI advancement, particularly in terms of reskilling and training to prevent job displacement
How to attract and retain AI talent for public service when big tech companies are offering substantial compensation packages
Speaker
Chuen Hong Lew
Explanation
This is essential for governments to maintain oversight capabilities and develop AI governance frameworks that can effectively regulate the technology
How to transition and retrain workers whose jobs may be displaced by AI, particularly entry-level positions
Speaker
Chuen Hong Lew
Explanation
This addresses the practical implementation challenges of managing AI’s impact on employment and ensuring human-centric AI development
How to prevent authoritarian regimes from using AI to strengthen military, intelligence and surveillance capabilities
Speaker
Jennifer Bachus
Explanation
This represents a national security concern about the dual-use nature of AI technology and the need for protective measures
How to develop practical testing and benchmarking standards for AI models that bridge scientific research with real-world deployment challenges
Speaker
Chuen Hong Lew
Explanation
This addresses the gap between theoretical AI safety research and practical implementation needs for industry deployers
How to ensure AI governance remains inclusive and multi-stakeholder while managing the rapid pace of technological change
Speaker
Anne Bouverot
Explanation
This addresses the challenge of maintaining broad participation in governance discussions when the technology is evolving faster than traditional policy-making processes
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.