State of play of major global AI Governance processes

29 May 2024 14:30h - 15:15h

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Full session report

Global AI Governance Takes Centre Stage as International Experts Convene for Inclusive Framework Development

At a significant panel discussion on the state of play of major global AI governance processes, Dr. Ebtesam Almazrouei, a renowned AI expert, moderated a session with prominent international figures. The panelists included His Excellency Hiroshi Yoshida from Japan, Thomas Schneider from Switzerland, His Excellency Shan Zhongde from China, His Excellency Dohyun Kang from Korea, Alan Davidson from the USA, and Juha Heikkilä from the European Commission.

Ambassador Thomas Schneider highlighted the importance of context-based AI regulation, emphasizing the need for a global understanding of risks and impacts rather than focusing solely on the technology. He proudly discussed the Council of Europe treaty, which aims to ensure that existing human rights and democratic principles are upheld in the context of AI use, inviting global participation in this initiative.

Juha Heikkilä from the European Commission elaborated on the EU AI Act, set to be the first comprehensive, legally binding regulation of AI. He detailed the phased implementation of the Act, designed to ensure a strong pre- and post-market enforcement system, and introduced the European AI Office, which will play a crucial role in coordinating and supervising AI regulation across the EU.

Alan Davidson from the United States discussed the US approach to AI governance, which includes voluntary commitments from AI companies, a comprehensive AI executive order, and the establishment of the US AI Safety Institute. He stressed the need for bipartisan legislation to further ensure AI safety and trust.

Shan Zhongde from China shared the country’s commitment to AI ethics and governance, outlining China’s efforts to implement a human-centered approach and practical measures to mitigate AI risks. He also extended an invitation to the World Conference on AI in Shanghai, emphasizing China’s focus on AI for good.

Hiroshi Yoshida from Japan discussed the country’s active role in international AI governance, including the Hiroshima AI process launched at the G7 in 2023. He stressed the importance of interoperable governance frameworks that allow for different implementation approaches while maintaining a common understanding of necessary actions.

Dohyun Kang from Korea provided an update on the AI Safety Summit, affirming the validity of the goals set during the UK Safety Summit and highlighting Korea’s commitment to developing specific AI safety standards and promoting inclusivity in AI governance.

The panelists agreed on the importance of collaboration and cooperation to develop inclusive AI governance frameworks aimed at harnessing AI for good. They shared a common vision of leveraging AI to benefit humanity and achieve the Sustainable Development Goals, each bringing unique perspectives and initiatives from their respective countries and regions. The consensus was on the need for interoperability, sharing of best practices, and a multi-stakeholder approach to ensure that AI serves the common good globally.

Session transcript

Introduction:
So, I would like to invite up the next panel now, which is the state of play of major global AI governance processes. And we have a moderator, Ebtesam Almazrouei. Dr. Almazrouei is the founder and CEO of AIE3 and is recognized for her pioneering Falcon AI models, including the Middle East’s first open-source LLM, Falcon 40B, and the world’s most powerful open AI model, Falcon 180B, in 2023. Please welcome her to the stage. I’ll quickly introduce the other panelists. Please welcome them to the stage as I do so. His Excellency Hiroshi Yoshida has served as Vice Minister for Policy Coordination at the Ministry of Internal Affairs and Communications of Japan since 2022. Thomas Schneider is Ambassador and Director of International Affairs at the Swiss Federal Office of Communications in the Federal Department of the Environment, Transport, Energy and Communications. His Excellency Shan Zhongde serves as Vice Minister of the Ministry of Industry and Information Technology of the People’s Republic of China. His Excellency Dohyun Kang serves as Vice Minister and Head of the Office of ICT Policy at the Ministry of Science and ICT of the Republic of Korea. Alan Davidson is the Assistant Secretary of Commerce for Communications and Information and Administrator of the National Telecommunications and Information Administration. And lastly, Juha Heikkilä is Adviser for Artificial Intelligence at the European Commission.

Ebtesam Almazrouei:
Your excellencies, respected guests, the Secretary-General of the ITU, thank you for organizing the first AI Governance Day. It’s a crucial moment to gather everyone to discuss an important topic, which is AI governance. In our morning discussions today, we discussed how we can implement AI in the most secure, inclusive, and trustworthy way, and what exactly the landscape of AI is and how it will evolve. In today’s session, I would like to welcome your excellencies and respected guests from the Council of Europe, China, the United States of America, Korea, Japan, and the European Union to discuss the different AI activities and the government regulations and frameworks that your countries and governments have already put a lot of effort into shaping. So, first of all, I would like to start with Ambassador Thomas Schneider. You have successfully convened a group of countries to sign a treaty in a polarized world. What challenges did you encounter during this process? And could you also tell us which parts of the treaty you are most proud of?

Thomas Schneider:
Thank you, and thanks for convening this session. Before I go to the treaty itself, I would like to give a little bit of an outline of what role, in my view, the treaty should play in the bigger governance setting. Because there are many people that ask for one new or established institution to solve all the problems, one law to be created that will solve all the problems; politicians and the media normally like this. But if you look, for instance, at how engines are regulated, we don’t have one UN convention on engines and everything, nor nationally. We have hundreds and thousands of technical, legal, and sociocultural norms that regulate mostly not the engine itself, but the vehicles and the machines that are using engines. We regulate them in different contexts. We regulate the people that are driving the engines. We regulate the infrastructure. We regulate or protect people affected. But it is all context-based; it is not the engine, it’s the function of the engine, the effect of the tool. And there are different levels of harmonization. We allow people in the UK to drive on the other side of the road; we even allow them to drive here. It more or less works. In aviation, it’s probably difficult if planes land from different directions at the same airport. So there are different levels of harmonization in engine regulation. And I think the same logic should be applied to AI. It should be context-based wherever we can. It should be about risks and impacts, not the technology itself. And that is specific to every culture in many ways, to economic incentives and so on. At the same time, we need a common understanding about what we try to achieve. We need to have a global discussion about how we deal with risks, what the risks are, what we are trying to protect. And so we need a coherent approach, but not necessarily one institution; rather, hundreds and thousands of pieces that work together. And the Council of Europe treaty was drafted not in a spirit to create new human rights, not to reinvent the wheel, but actually to make sure that the existing human rights and protections for democracy and the rule of law are applied in the contexts where AI is used. And it was set up not as a European process. The Council of Europe, by the way, is not the European Union. It’s an institution like a UN of Europe, with 46 member states. We had 57 states negotiating in the beginning. It’s open to anyone that cares about human rights, democracy, and the rule of law. Every country can join. It’s trying to fulfil one particular piece in this clockwork of methods to make sure that human rights, democracy, and the rule of law are protected when using AI, and at the same time allow for innovation. And this, of course, is not an easy thing. First of all, bridging institutional differences between the different countries, cultures, and regions of the world that were cooperating and hopefully will be cooperating: that was one of the key challenges we faced. And also, how to make sure that we protect existing rights in a dynamic way, not in a bureaucratic way, so that this instrument is fit for purpose now, but also in the future. And I think this is why it’s so helpful to have discussions like this. Because we need dialogue. We need dialogue here at the Internet Governance Forum to understand what the challenges are. How do they differ in different contexts, in different regions? How do we agree on a shared vision?
How can we create a mechanism like a Swiss clockwork, with different tools that feed into each other, so that it shows the right time, runs neither too fast nor too slow, and doesn’t break down? Thank you.

Ebtesam Almazrouei:
Thank you, Ambassador. My question now will go to Juha. So, with the implementation of the AI Act, and the announcement two weeks ago that it will come into effect in the coming month: could you share how this directive will be translated into practice, and how you can measure the success of the European AI Act and whether it is really achieving its goals?

Juha Heikkila:
Thank you very much, and thank you very much indeed for the invitation to be on this panel. So indeed, the European Union AI Act is the first comprehensive, horizontal, and legally binding regulation globally, and it will apply to both public and private providers of AI applications equally, which makes it very different from other governments’ attempts and efforts elsewhere. Technically speaking, it’s a regulation, for those legal eagles who are interested in EU law. That means that it applies equally in all the 27 member states of the European Union. I would like to first point out that it becomes applicable in stages. It will enter into force in about a month’s time; it’s just missing signatures and publication, and then it will take 20 days. So about a month from now it will enter into force, and then the first provisions will become applicable six months after this. Those provisions concern the prohibitions. Then the rules for general-purpose AI models, so large language models, generative AI, will become applicable after 12 months. And for the so-called high-risk systems, the rules will become applicable either after 24 or 36 months. This is to give providers time to adapt, to prepare for the applicability of this new legislation. It’s a risk-based piece of legislation, so it intervenes where necessary. It doesn’t regulate technology, but it regulates certain uses of technology, so the context is important, the use is important. The implementation itself is based on a strong pre- and post-market system of enforcement and supervision. So it’s a decentralized system of national notified bodies checking compliance with the AI Act requirements before high-risk systems can be used, can be placed on the EU market. These are the member states’ national notified bodies, and there are then market surveillance authorities ensuring the post-market monitoring. So there are the pre-market checks and then, of course, the post-market monitoring. This system of implementation is based on a well-established and functioning system that we have in place in the European Union on product safety, so it applies similar principles. I would like to highlight the importance of one body here, which is the European AI Office. We have set up the European AI Office, and that AI Office will coordinate the work of the national notified bodies and national authorities which are involved in this. So it has coordination and monitoring tasks, to some extent, to ensure uniform application, uniform implementation. But it will also have a special role in the supervision of general-purpose AI models. The AI Office will have special powers in this regard. For example, it can do evaluations, it can request measures, particularly on general-purpose AI models which carry a systemic risk. Providers of such models have certain additional obligations beyond just transparency. So the AI Office has roles which go beyond the safety institutes which have been set up in some countries, because it has a much broader scope: the AI Act deals not just with safety, but also the protection of health and fundamental rights. So it has quite a different profile, but it includes the safety aspect. Any cooperation that we have with safety institutes elsewhere, those that are being set up or have been set up, will be with the AI Office.
So it will have an important role in the implementation. I should also add that it will deal with research, innovation, and deployment aspects, and also the international engagement. So it has a very broad, comprehensive set of roles. I should maybe add that we also have a couple of other bodies: a scientific panel, which will support the implementation, an advisory forum with stakeholders, and member states’ representatives in an AI Board. As for indicators, well, there could be technical, technocratic indicators of its success. For example, the number of systems that undergo conformity assessment and get the so-called CE marking, which then enables them to be put on the market, how many are registered in the database, et cetera. However, in a way, the most important indicator is something that is hard to measure, because we think that this will increase trust in AI systems. The AI Act and its provisions, the safeguards and the guardrails it provides, will increase trust in AI systems. Why is trust so important? Trust is important because trust is the sine qua non for uptake, and uptake is the sine qua non for benefits to materialize. So we need trust to have uptake, and we need uptake to have benefits, so that we can actually enjoy this technology and the potential and positive aspects it has. Thank you.

Ebtesam Almazrouei:
Thank you, Juha. While we all agree on the importance of measuring the effectiveness of the AI Act, we would also like to hear more from Alan, the Assistant Secretary from the United States of America, about the executive order. The United States has already adopted a voluntary-commitment approach with the private sector and issued an executive order. What are the next steps for the United States, and how can you measure their success domestically and internationally?

Alan Davidson:
Well, thank you, Dr. Almazrouei. And a quick thank you and congratulations to the ITU and to Secretary-General Doreen Bogdan-Martin for convening all of us and for hosting a very successful day already. And to all of you for joining us to discuss how we can leverage AI to help achieve our common collective goals and the Sustainable Development Goals. You know, the starting point for us has been that responsible AI innovation, emphasis on responsible, can bring enormous benefits to people. It’s going to transform every corner of our economy. But we will only realize the promise of AI, as others have said, if we also address the serious risks that it raises today. And we’ve heard a lot about those. They include concerns about safety, security, privacy, discrimination, bias, and risks around disinformation, as we heard so eloquently on the previous panel. We also face a risk of exacerbating the inequities that already exist in our world if we don’t ensure that these advances are available to everyone. To talk about how we’ve been approaching this: domestically, the entire US government has really moved with urgency, I would say, to seize on the tremendous promise and potential risk of this moment. As you noted, last summer President Biden secured voluntary commitments, that was our starting point, from the leading AI companies to help make sure that AI systems are safe before they’re released. And these commitments helped us, and the world, I hope, get ahead of the rapid pace of development that we started to see in these frontier models. Developer commitments were just the first step. The U.S. government issued an AI executive order last fall, which is, we think, one of the most significant government actions to date on AI safety, security, and trust, and it brings the full capabilities of the U.S. government to bear in promoting innovation and trust in AI. It also lays out a very broad work program, from research to tooling to policy, to address the risks of AI and to use the authorities that already exist in law to bring to bear on these issues. Just as an example of a few of the big initiatives folks have been aware of: we have stood up the U.S. AI Safety Institute to do the technical work around security science required to address the full spectrum of AI-related risks. And just last week, with the Secretary of Commerce, we released a vision paper for that safety institute. We’re also pursuing a broad range of other initiatives. One that I’m particularly interested in, and that we’re leading going forward, is the question of open model weights and the openness of frontier models and dual-use foundation models. That domestic work, and it’s far-ranging, gives us, going forward, a sound basis for our international approach. You know, as we’ve noted, the international community has been working on AI governance and principles for years, and that is something we should build upon. Much of our work is around thinking about how we can leverage the tremendous potential of AI for our collective goals. Just as an example, take the Sustainable Development Goals: as many of you know, we are on track to achieve just 12 percent of the targets. On many of these benchmarks we’ve plateaued; on some of them, we’re actually regressing. But studies suggest that AI could accelerate progress on 80 percent of the SDGs, in part by automating work and improving decision-making.
AI can help map soil, can yield better crops. It can help us predict earthquakes, as we’ve seen in studies. All of these are the kinds of things that, optimistically, we should be harnessing and making sure that these tools are available widely to everyone. We’re working to build global momentum around that idea of harnessing AI for good. In March, the U.S. led the passage of the first-ever standalone resolution on AI in the U.N. General Assembly. And we believe that gives us a framework for leveraging AI for economic and social progress, while respecting human rights and leaving no one behind. Last week’s summit in Seoul of the AI Safety Institutes, congratulations to our colleagues from Korea, was another important building block in multi-stakeholder collaboration. And the list goes on. There are many initiatives that are underway. I think what you’re hearing is a sense of urgency from government to address the issues of the moment and to realize that if we work together, and we are committed to working together, we can capitalize on this energy, capitalize on this moment of public attention, and ensure that the AI revolution is a revolution that is working for everyone. Thank you.

Ebtesam Almazrouei:
Thank you, Alan. I couldn’t agree more. AI for good should be our canvas, and how we can harness the power of AI across all 17 SDGs is a crucial step that we should all agree on as governments, industry leaders, NGOs, and academic institutions, as well as how we can foster global collaboration and cooperation toward achieving the AI for good goals. Now, moving to China. Your Excellency, Mr. Shan Zhongde, and I apologize if I pronounce your name incorrectly; you can correct me if it’s not the right way. China is already committed to the concept of AI ethics first and AI for good. Could you share more practical experience of AI governance in China: what exactly have you been doing, and what are the regulations and AI frameworks that you have already put into practice?

SHAN Zhongde:
Thank you very much. This is a very important initiative worldwide, and we are going to promote development, so this event will be very important. Last year, we promoted the initiative AI for good and proposed several methods, and we will base ourselves on a human-centered approach so that AI will work for good. We are currently working on a consensus so that governance is at the center, and we are putting these words into practice. In China, we would like to explain how we reflect in practice a human-centered approach and how AI can be used for good. Firstly, we would like to insist on this theory. We want to ensure that we prevent risks and that we analyze risks, so we have many warning systems; we are creating policies to do so, we are working with sector players to establish initiatives for open, transparent work, and we are focusing on open data. Some companies are already working on this as a priority. The practice of having very strong theories is very important in China. Secondly, we have a number of very positive examples. In China, we are drafting and creating different strategies and methods, and through research, we are progressing. We are working on different algorithms, and in that way, we are engaging in in-depth work with AI to adopt a number of policies and regulations. We are also testing technology, and we are dividing products into different categories: finance, health, and transport systems. AI is used in all these fields, and we have specific standards for this technology. In April, China adopted a first set of projects for AI. We have principles and regulations with the aim of avoiding risk and compensating for risk. We are also working to ensure fair trade. We wish to have a human-centered approach and to avoid discrimination against users, and for this we have created different sectors and standardized them. We have data notification systems that allow us to have a multi-stakeholder and multi-sectoral assessment. We have also participated in the action plan on well-being through AI. We are working actively in this field, and we are working actively with ASEAN on a global level. To conclude, regarding 1G and 2G, we have adopted a number of regulations, and we are focusing on technical capacities to increase and enhance our working methods and our innovative technologies, which have allowed us to make a mark in this field. We have also worked on a public platform for modern technologies. We use digital content to create simulations, and we would like to insist on the fact that AI should be for good. Thank you.

Ebtesam Almazrouei:
Thank you, Your Excellency, for the examples you provided of the best practices and implementation of AI, especially AI for good, that you have already put in place in China. Now, moving to Your Excellency, Mr. Hiroshi Yoshida. As the chair of the G20 in 2019, and with the recent launch of the Hiroshima AI Process at the G7 in 2023, Japan is actively working on international AI governance and collaboration. What outcomes do you anticipate from these efforts? And can you provide us with any updates about these collaboration efforts, please?

Hiroshi Yoshida:
Thank you. First of all, I would like to congratulate Thomas on the adoption of the Framework Convention of the Council of Europe. And yes, as already discussed in this panel and the previous one, we know that there are many risks in AI. Many aspects have been pointed out: bias in learning data, disinformation, cyber risks. The Bangladesh Minister pointed out at the previous session the use of deepfakes in disasters, and in our country too, we experienced deepfakes during a disaster caused by a typhoon, among others. Deepfakes in elections have also been pointed out recently. But on the other hand, we all know that AI has a big potential. So what is important is that we should not hesitate to make the best use of AI because of the risks; instead, we should mitigate those risks and get the best use of AI. And that’s why we started a discussion on AI in international forums. The first one we started in the OECD and under the G20, and at the 2019 Osaka Summit of the G20, we agreed on the G20 AI Principles. It is a kind of common understanding of what we should think about when building a policy regarding AI. Of course, in these four years everything has changed, and generative AI came up in the last two years. So, to cope with it, we launched the Hiroshima AI Process in 2023. The concept of the Hiroshima AI Process is that we need some kind of governance framework, but it should be interoperable. And interoperable means not every country has to take the same action: what actions should be taken is at the very same level, but how to implement them is up to each country. So we know that the AI Act has been adopted in Europe, but on the other hand we have another approach. Of course, voluntary commitment can be an option, but what actions should be done need to be interoperable. So, for example, in the outcome of the Hiroshima AI Process, it asks for evaluating risks in advance of putting products onto the market. Those actions to be taken should be interoperable, so that AI developers, AI service providers, and AI users know what to do. We discussed this last year, and in December last year we agreed on a comprehensive policy framework which includes guiding principles for all AI actors, not only AI developers but also AI service providers and AI users, and also a code of conduct for AI developers. This year we are continuing to discuss how to implement it, and a kind of monitoring mechanism is being discussed under the Italian presidency. Also, we launched the Hiroshima AI Process Friends Group at the beginning of this month, and now 50 countries and regions have joined it. And in our country, we also established an AI Safety Institute this February, and the important thing here is also interoperability. We are now working in that AI Safety Institute on how to assess AI safety, and we cannot do this only in our own AI Safety Institute; it should coordinate with the AI safety institutes in other countries, and such assessments should also have interoperability. Thank you very much.

Ebtesam Almazrouei:
Thank you, Your Excellency. One of the things I noticed from the morning discussions and also this afternoon’s discussion is that most countries have started to have their own AI safety institute, such as Korea, Japan, and also the USA. And I would like to emphasize the important step each country has to take to set up its own institute or sandbox to test the AI frameworks that can best be embedded and put into place for its society and governmental work. Now, going to Korea. Your Excellency, Dohyun: the UK Safety Summit was held in November 2023, where I contributed with my colleagues, AI leaders, and government representatives, and we discussed many themes. Last week, you hosted the second AI Safety Summit in Korea. What has been implemented so far? Do you see the goals that were set by the UK Safety Summit as still valid, or has a different approach been taken by the Korean government?

Dohyun Kang:
Thank you very much for introducing me, and thank you again to the Secretary-General of the ITU for the leadership of this wonderfully organized conference. I am also glad to introduce the Seoul AI Summit and its results. If I am asked whether the goals of the Bletchley Summit are still valid, then of course they are valid. The Seoul Summit is the second version, developed over the six months since Bletchley. The developed version has several points. The first is that the topics dealt with in the summit are diverse: one is safety, the second is innovation, and the third, which several guests have already discussed, is inclusivity. The second point is more detailed: it is almost an action plan with respect to AI safety. At the Seoul Summit, we strongly recommended networking between the AI safety institutes and emphasized global cooperation between them. That is a more detailed matter. The third point is related to government action. For example, regarding AI ethics in the Korean case: as long as I remember, in 2019 the OECD first announced its AI ethics principles, and the next year the G20 also announced AI ethics principles. After that, the Korean government established our own standard, our ethics principles, and then we established guidelines and checkpoints for developers and operators in each company. The Seoul AI Summit was composed of three parts: one is the leaders’ session, the second is the ministers’ session, and the third is the global forum. At the leaders’ session, the leaders, presidents, prime ministers, and the Secretary-General adopted the Seoul Declaration. In the Seoul Declaration, we included an emphasis on the importance of testing and measuring AI safety. The second element is addressing the various kinds of side effects of AI. And the third is that we recognize that international cooperation between each other must be much more enhanced, focusing on inclusivity; we also have to contribute to what could be called the digital south or developing south, the AI south. So we also have to take more detailed actions to solve this problem. The AI Safety Summit included all of these things in the Seoul Declaration. The next day, the global forum and the ministerial meeting were held. The ministerial meeting statement covered all of these things: the democratic environment, all the activities, culture, and the human brain, all the side effects of AI. And notably, low-power semiconductors were also included there. Okay. That’s the quick summary of the Seoul AI Summit. And thank you for the congratulatory comment on our summit. The next summit will be held in France, and from now on, the United Kingdom government and the Korean government will be discussing the topics of the next ones; more detailed things will be scheduled. Thanks again to our colleagues from the United Kingdom, one of the best of our partners, the co-organizer of this summit. Thank you very much.

Ebtesam Almazrouei:
Thank you, Your Excellency. I think now we will go to the next part of our session. We have heard from everyone: from the United States about the executive order and what they are doing in terms of AI governance and implementation, to Japan, Korea, China, and the European Union with the AI Act. Now that you have heard what your colleagues and peers in each government and country are trying to do, what is one key element you would like to incorporate from their practices into your own country’s or region’s approach to AI governance and frameworks?

Hiroshi Yoshida:
Yes. As I said in my previous remarks, interoperability is a very important factor for AI governance, and we want to know what other countries are doing; knowing other countries’ policies would be effective for us in discussing AI policy. We want to share more information on these policies and have a multi-stakeholder discussion. Of course, not only governments can develop an AI governance framework; we need multi-stakeholder discussion and to know what other countries are doing. Thank you very much.

Thomas Schneider:
Thank you. Thank you. It is interesting and good to hear that all governments seem to want the same thing. They want to protect rights, they want to put people in the center, they want to allow innovation. But then, if you look at the tools, there are many ways to Rome, as the Italians say. So some give preference to voluntary commitments or regulations, others go for a horizontal law that tries to fulfil at least some of the purposes, others have other incentives. So I think we should all cooperate together, and not just governments but all stakeholders, of course, to develop a global governance and cooperation framework that allows us to do the same in different ways that reflect our situations, our cultures, our needs. And I think this is what is needed. At the same time, if we are honest, we also need tools that empower people, that create transparency and accountability, so that people can react in case governments or companies do not do what they say they do, i.e. support people, protect them, and so on. So, if a government or a company does damage to its own or other people, then we should have means to actually stand up and create incentives so that this is not happening, or is avoided or minimized. And therefore, the Council of Europe Convention is one tool that unites all those that care about human rights, democracy, and the rule of law, by agreeing on the same values but offering adaptable, agile, dynamic ways fit for every country to sign up to it, like the Cybercrime Convention did, where more than 100 countries are cooperating in a modular way, on substance, with additional protocols, but also at different levels of participation: you can sign and ratify, or you can just cooperate. So this is one of the big contributions. But on a global governance level, and this is where I want to end, fortunately we do not need to start from scratch. We have many actors that already perform important functions, like the ITU in their fields, like UNESCO, like the standards institutions, and so on. We just need to make them cooperate better and more coherently, and identify concrete gaps where maybe additional functions or structures would be needed, so we can actually start not from zero, but from a reasonable amount of activities that we already have.

Ebtesam Almazrouei:
Thank you, Thomas.

SHAN Zhongde:
I’m very glad to hear from everyone about their experiences with AI governance over the past few days. I have many times heard the speeches from the ITU Secretariat, and I have the following thoughts. First, we should continue to strengthen norms and standards, and we should promote the formation of a global framework. This is what we should do first. Secondly, we should strengthen international exchange and make use of generative AI technology, so the sharing of experiences is very important. Thirdly, we should strengthen and deepen cooperation, and jointly enhance the safety, reliability, controllability, and fairness of AI, so as to make AI empower transition and jointly promote AI for the benefit of mankind and of the SDGs. I would like to take this opportunity to extend an invitation to all of you to attend the World Conference on AI, to be held in Shanghai in the coming July. In particular, at this conference, the ITU, my ministry, and the Shanghai Municipal Government will co-host a forum entitled AI for Good. I’m ready to meet with everyone in Shanghai to jointly push for AI for good and for the benefit of people all over the world. Thank you.

Ebtesam Almazrouei:
Thank you, Your Excellency. Indeed, we look forward to joining you in Shanghai to foster our collaboration and to discuss how we can harness the power of AI across all 17 SDGs. Going back to the second round of our main discussion: how can you benefit from your peers’ experience in regulating AI frameworks, and what are the best practices that you want to put into implementation in your government and in your country?

Dohyun Kang:
Yes. Actually, the Korean government wants to contribute to all of AI governance, but the most specific things we want to do are these. The first is more specific AI safety standards. There are four national AI safety institutes up to now: the United States, Canada, the United Kingdom, and Japan. Our government will also open our AI safety institute at the end of the year. Then the five AI safety institutes can work out how to do the testing, whether for the private sector or the public sector. In the short term, we want to focus on that. The second is the long-term approach, which is related to inclusivity. The Korean government has a high reputation in terms of inclusivity. We have worked on this for almost 30 years, so we know what is good and what is bad, where we failed and where we sometimes succeeded. That know-how resides in our policymakers. We also understand the situation of other countries. As you know, Korea is in a very unique situation: we want to go to the global markets, we want to be one of the best nations in terms of AI, and we also want to contribute to inclusivity with other countries, because we know the value of inclusivity. The next thing is that we will launch another big project with ASEAN; it is a big project following my meeting with another country’s ministry. So those are our actions, and we want to reflect your opinions in our policies. We will try to take many more steps and contribute more practical things to this discussion. That is the strategy of the Korean government. Thank you very much.

Ebtesam Almazrouei:
Thank you, Your Excellency. We are short on time; I would really appreciate it if everyone could stick to one minute.

Alan Davidson:
I will be very quick. I will say, first of all, there are so many good ideas, and we have tried to incorporate many of them. Our approach, as you heard, has been voluntary commitments, a very comprehensive executive order, tooling, research, and ultimately governance activities with immediate effect. Probably the one thing that others have spoken about, and that we have not pursued yet, but that the President has called for, is bipartisan legislation in the U.S. to further harness the power of AI while keeping Americans safe. Congress will determine the exact approach for us, but the President has been clear that he wants to see legislation that incorporates principles of trust and safety, and that gives us the tools we need to make sure we’re regulating properly. I will say it is early days for this conversation; we have a lot to learn. I do feel that together we can move forward on these big issues around AI with urgency and with wisdom. That is our goal. Thank you. Thank you so much.

Juha Heikkila:
So, two things emerge from this discussion. First, there is a lot of attention on AI now, so we have to seize the moment and make AI work for good. I think this is something that we share here and certainly something that we can support. How we go about it in cooperation, of course, we will see where we can converge and work on that. And then, as was mentioned by the Japanese Vice Minister, the implementation details are different. Different countries, different jurisdictions take different decisions. We in the European Union have decided to legislate; we have hard law. There are other countries which have followed suit and are introducing or preparing legislation; yet others have decided to rely on other types of measures. So this is, I think, an important point. Each jurisdiction will choose whatever suits them best and what they feel is the most appropriate way of doing it, but the compatibility of these different governance approaches, and the international approach, is of course something that is important. This was mentioned earlier. This is also what we want to work towards, and I already mentioned that the international aspects are a key cornerstone of our EU AI strategy. As the President of the European Commission stated in her State of the Union speech in September, we work to guide innovation, to set guardrails on AI, and to work together with other jurisdictions on guardrails and on international governance of AI. Thank you.

Ebtesam Almazrouei:
Thank you, Juha. Well, we are concluding the second session. What has been discussed here is how we can all collaborate, and the importance of collaboration and cooperation to foster the development of inclusive AI governance for AI for good. Thank you for participating in this panel. I hope that colleagues here can take these remarks, and that every country starts to build on these best practices and experiences. Thank you. Thank you.

Speaker statistics

Speaker               Speech speed           Speech length   Speech time
Alan Davidson         161 words per minute   1089 words      407 secs
Dohyun Kang           123 words per minute   996 words       486 secs
Ebtesam Almazrouei    136 words per minute   1149 words      506 secs
Hiroshi Yoshida       107 words per minute   716 words       400 secs
Introduction          116 words per minute   244 words       126 secs
Juha Heikkila         165 words per minute   1184 words      430 secs
SHAN Zhongde          95 words per minute    766 words       481 secs
Thomas Schneider      187 words per minute   1133 words      364 secs