Open Forum #26 High-level review of AI governance from Inter-governmental P
Session at a Glance
Summary
This discussion focused on global AI governance from various perspectives, including government, industry, civil society, and youth. Participants explored the current status, challenges, and priorities in AI governance across different regions.
Key themes included balancing innovation with security and equality, addressing the needs of emerging economies, and ensuring cultural and linguistic diversity in AI development. The importance of open-source AI and its implications for economic development and flexibility were highlighted. Challenges such as data privacy, copyright issues, and the governance of both advanced and traditional AI applications were discussed.
Participants emphasized the need for a harmonized global approach to AI governance, recognizing the geopolitical and economic competition surrounding AI development. The discussion touched on the importance of infrastructure, particularly in Africa, where limited data centers and skills gaps pose significant challenges.
The role of youth in AI development was highlighted, along with concerns about data ownership and localization. The need for inclusive governance frameworks that involve multiple stakeholders, including youth, was stressed. Participants also discussed the importance of enforcement mechanisms and the potential for tax incentives to encourage compliance with AI governance policies.
The discussion concluded with a call for collaborative efforts in AI governance, emphasizing the need for transparency, partnerships between public and private sectors, and the implementation of voluntary reporting frameworks to inform policy decisions. Overall, the participants agreed on the necessity of a unified, inclusive approach to AI governance to ensure its responsible and beneficial development globally.
Keypoints
Major discussion points:
– Current challenges and risks of AI, including security, bias, diversity, and environmental impact
– The need for inclusive, global AI governance frameworks and standards
– Data and infrastructure gaps between developed and developing regions, particularly Africa
– Balancing innovation with regulation and safety considerations
– The roles and responsibilities of different stakeholders in AI development and governance
The overall purpose of the discussion was to explore different perspectives on the current state of AI governance from government, industry, civil society and youth representatives. The goal was to identify key challenges, priorities and potential paths forward for developing effective global AI governance.
The tone of the discussion was largely collaborative and solution-oriented. Speakers acknowledged both the opportunities and risks of AI, and emphasized the need for cooperation across sectors and regions. There was a sense of urgency about addressing governance gaps, but also optimism about the potential for AI to drive progress if managed responsibly. The tone became slightly more critical when discussing inequalities in AI development and data ownership between Global North and South.
Speakers
– Yoichi Iida: Assistant Vice Minister at the Japanese Ministry of Internal Affairs and Communications, moderator of the session
– Audrey Plonk: Deputy Director of Science and Technology and Innovation Directorate of OECD
– Thelma Quaye: Director of Infrastructure Skills and Empowerment from Smart Africa
– Leydon Shantseko: Representative from Zambia Youth IGF
– Henri Verdier: French Ambassador
Additional speakers:
– Melinda Claybaugh: Privacy Policy Director from META
Full session report
AI Governance: A Global Perspective
This comprehensive discussion on global AI governance brought together diverse perspectives from government, industry, civil society, and youth representatives. The session, moderated by Yoichi Iida from the Japanese Ministry of Internal Affairs and Communications, explored the current status, challenges, and priorities in AI governance across different regions.
Current State and Challenges of AI Governance
The discussion began with a thought-provoking comment from Henri Verdier, the French Ambassador, who questioned whether the AI revolution would truly represent progress for humankind. This framing set the tone for considering the broader implications and ethics of AI development, beyond mere technological advancement.
Speakers highlighted several key challenges in the current AI landscape:
1. Balancing Innovation and Security: Governments face the task of fostering innovation while addressing potential risks and security concerns.
2. Infrastructure and Skills Gap: Thelma Quaye, representing Smart Africa, a pan-African organization, emphasised the lack of necessary infrastructure and skills in Africa to fully leverage AI. She likened AI to water, highlighting its potential to nourish and help societies grow, but also underscoring the need for proper governance.
3. Data Sovereignty and Localisation: Leydon Shantseko, a youth representative from Zambia, raised concerns about data sovereignty and localisation issues, particularly in Africa. He pointed out the practical challenges of implementing data localisation policies when most platforms used are not hosted in Africa.
4. Geopolitical Competition: Henri Verdier highlighted the geopolitical aspects of AI development, framing it as a source of power and intense competition between companies, countries, and international organisations.
5. Open Source AI: The discussion touched on the role of open source AI models in promoting innovation and economic development, while also presenting challenges for governance. Melinda Claybaugh, a policy director from META, elaborated on their open-source AI approach and its implications.
Priorities for AI Governance Frameworks
The speakers agreed on the need for a holistic, international approach to AI governance, but differed in their specific focuses:
1. Inclusive Approach: There was consensus on the importance of multi-stakeholder cooperation in developing AI governance frameworks. This includes involving youth throughout the process, as emphasised by Leydon Shantseko.
2. Reflecting AI Ecosystem Realities: Melinda Claybaugh stressed that governance should reflect the realities of the AI value chain and ecosystem, particularly for open source AI.
3. Building Local Capabilities: Thelma Quaye highlighted the importance of building local African datasets and AI capabilities to ensure relevance and reduce bias.
4. Interoperable Governance Tools: Audrey Plonk from the OECD emphasised the development of interoperable governance tools and reporting frameworks.
5. Balancing Global and Local Needs: The discussion highlighted the need to balance global standards with local needs and perspectives, particularly in developing regions.
6. Enforcement Mechanisms: Thelma Quaye stressed the importance of developing effective enforcement mechanisms for AI governance policies, especially in regions like Africa.
Roles and Responsibilities in AI Development
The speakers outlined various roles and responsibilities for different stakeholders:
1. Governments: Responsible for balancing innovation and security, and creating appropriate regulatory frameworks.
2. Companies: Should be transparent about AI development and associated risks.
3. International Organisations: The OECD is working to provide data and harmonisation across AI approaches, including through its AI observatory and integration of the Global Partnership on AI.
4. Youth: Should be involved in policy-making and allowed to innovate while addressing potential risks.
5. African Nations: Need to increase data infrastructure and sovereignty.
6. Public-Private Partnerships: Collaboration between public and private sectors is crucial for advancing common AI goals, such as developing representative global datasets.
Proposed Solutions and Action Items
Several concrete actions and proposals emerged from the discussion:
1. The OECD is finalising a reporting framework to implement the Hiroshima AI Code of Conduct, aiming to provide interoperability and harmonization across different AI approaches.
2. France is organising an international AI summit in Paris on 10 and 11 February 2025 to discuss global AI governance, as announced by Ambassador Verdier.
3. Efforts are being made to increase the interoperability of AI governance tools and frameworks across different initiatives.
4. Proposals for using AI and virtual reality to scale up technical and vocational education and training (TVET) skills in Africa.
5. Suggestions for developing partnerships between public and private sectors to advance common AI goals, such as developing representative global datasets.
Unresolved Issues and Future Considerations
Despite the productive discussion, several issues remained unresolved:
1. Effectively balancing innovation with regulation and risk mitigation.
2. Addressing data localisation and sovereignty concerns, particularly for developing regions.
3. Ensuring fair representation and inclusion of diverse perspectives in AI governance discussions.
4. Developing effective enforcement mechanisms for AI governance policies, especially in regions like Africa.
5. Increasing AI adoption across industries, particularly in smaller companies, as the current diffusion rate remains low.
6. Bridging the growing gap between public and private AI research capabilities, including the need for public research to reproduce private sector AI development results.
Conclusion
The discussion highlighted the complex and multifaceted nature of global AI governance. While there was general agreement on the need for comprehensive, inclusive approaches, the speakers’ diverse perspectives underscored the challenges in developing universally applicable frameworks. The conversation emphasised the importance of considering ethical, geopolitical, and developmental aspects alongside technical considerations in shaping the future of AI governance.
As AI continues to evolve rapidly, ongoing dialogue and collaboration between stakeholders will be crucial in addressing emerging challenges and ensuring that AI development truly represents progress for humankind. The upcoming international AI summit in France and the OECD’s work on reporting frameworks represent important steps towards more coordinated global efforts in AI governance. The inclusion of youth perspectives and the focus on addressing regional disparities, particularly in Africa, highlight the importance of a truly global and inclusive approach to AI governance.
Session Transcript
Yoichi Iida: AI Governance. So, my name is Yoichi Iida, Assistant Vice Minister at the Japanese Ministry of Internal Affairs and Communications. I'm the moderator of this session. We'll talk about global AI governance from different perspectives and from different communities such as government, industry, and civil society, including youth. So, we're now having a little bit of technical issues. So, first of all, we have Ambassador Henri Verdier from the government of France. And we have Melinda Claybaugh; she's a privacy policy director from META. And we have one of the speakers from the international organization, the OECD, online: Audrey Plonk, Deputy Director of the Science, Technology and Innovation Directorate of the OECD. From civil society, we have two speakers. One is Thelma Quaye, if I pronounce it correctly. She's Director of Infrastructure, Skills and Empowerment from Smart Africa. And also, we have one more speaker from civil society, in particular the youth community, Mr. Leydon Shantseko. He's a representative from the Zambia Youth IGF. So, thank you very much to all of you for joining us. And we will have a very productive session. So, in the beginning, I'd like to invite all five speakers to speak about your views on the general current status and also challenges in AI governance, probably in your domestic AI governance situation, or probably you can talk about the global AI governance situation. And you can also touch on your priorities, what you are most expecting from AI, and what you are doing now. So, I'd like to start with Ambassador Henri Verdier.
Henri Verdier: Good afternoon, everyone. So, you did ask us a very important question, and you did ask us privately to do it in four minutes, which is impossible. The main challenge is quite simple to say, not to do. Are we sure that this impressive revolution will be a progress? Not just innovation, not just power, but a progress for humankind. And that's probably the main responsibility of governments, to be sure that we balance innovation and security, economic growth and equality, and efficiency and diversity, and to find a good balance. And that's why we are part of, of course, a global movement with important companies, a multi-stakeholder community, but probably as governments, we have at least a responsibility in front of our citizens to pay attention to this. And as diplomats, I am a diplomat, we have a responsibility to do it within an international framework and in conversation with the important multi-stakeholder community. And if we start with the idea of progress, and then I will pass the mic to other speakers, I just want to emphasize that since the beginning of the story of AI, we have had different conversations about this. So last year, for example, the main topic was existential risk. Now we are speaking more about, for example, equal development and are we addressing the needs of emerging economies? I think that the most important thing is to start by recognizing that we will have to face a lot of challenges and to try to have a broad vision of the challenges. So security matters. Security is not just AI becoming crazy and attacking humanity. Security is also cyber security. Security is also bias. Are we sure that we did train the model with good data and that we are not just reproducing current inequalities? And security is not everything. We also need to think about cultural and linguistic diversity. I believe that if you don't have a large language model for your language, your language will disappear as a working language, so as an economic language. So we need to be sure that everyone will have the possibility to enter in this revolution. Diversity doesn't mean just linguistic diversity, because you need also to train the model with the knowledge of different cultures and to be sure that your point of view, your history, your perception of the world will be taken into account. We are soon to face the question of environmental impact, of intellectual property. Are we sure that we still know how to repay creators and creations and build a good framework? Maybe you will tell about this. Maybe we have to find new concepts to protect privacy, because maybe just to protect my personal data is not enough to protect my privacy. And we can continue. Maybe we need a specific policy to be sure that we train enough skills and competencies in emerging economies and they will take a seat in the driver's seat, not just as consumers. Maybe we have to rethink about education and not just to train engineers, but to be sure that the future citizens will be ready for this new world and there will be free minds and free citizens in the new world. I could continue. I won't. But the idea is that the holistic approach and the perception of the global question we have to face is, from my perspective, very important. Thank you.
Yoichi Iida: Thank you very much, Ambassador. You talked about a lot of various risks and challenges. In particular, you talked about security and diversity. Diversity will be very important when AI continues to develop. And also, we need to recognize the importance of protecting the next generation. A lot of risks and challenges were talked about, but at the same time, we also recognize the importance of innovation. So, what about the industry perspective? I would like to now invite Melinda to share your view.
Melinda Claybaugh: Yes, thank you so much. Can everyone hear me? Yes. Okay, great. So, just a bit of context to set the stage for my perspective from META. Two things. One is that in the AI space, we are very much an open AI company. Not OpenAI, but an open source AI company. And what that means is that we are all in on providing our AI technology on an open source basis. So, our large language model, Llama, we have different versions of it, but it's made available to anyone to download for free. And this, in our view, is the best way forward in terms of approaching AI innovation for a few reasons. One being that for developers, it is the most valuable and flexible option for them to build on and be able to customize applications to their local needs, fine-tune the way they would like to with the data that they want to. It also is the best, we think, from an economic development perspective. Being able to provide a really diverse ecosystem of AI tools to developers and to countries is going to have the most benefits from an economic perspective because it won't be locked into a few companies that are providing closed models. And then finally, it benefits us as a company because we won't be beholden to other operating systems, so to speak, that people will be building on our technology, which is a benefit to us. And we also come at it from the perspective of a company that has supported and signed on to various AI global frameworks in the last few years, including the Seoul Frontier AI Commitments, which will require us to publish an AI safety framework before the AI summit in France next year, as an early adopter also of, and really supportive of, the G7 code of conduct. And so that's our perspective. And what I've seen happening, I think, in the AI governance landscape, there are some positives and some challenges. I think the positives are that we've seen a real harmonization in the AI safety conversation at the global level. So there's an increased understanding of the safety risks. There's an increased understanding of the steps that we need to take to mitigate those risks. And more importantly, I think a firm understanding that we need to have a harmonized global approach to this global technology. I think some of the challenges, however, that we're seeing are that there's a lot of conversations happening that are not necessarily relating to each other. So while we have international agreement on the safety conversation, as the ambassador pointed out, there are other conversations happening. So there's the data conversation around data privacy and the use of data. There's the copyright conversation that's happening. There's the conversation around the governance of all AI, not just advanced AI, but kind of our classic AI, and how do we guard against the risks and harms from regular AI when it's used to make decisions that affect people's lives.
And so I think those are some of the conversations. Of course, there's the industry side: there's a lot of industry standards being developed that are important in different ways. And then there's the conversation around the AI safety institutes, which, I think in a positive development, are being stood up around the world and will help with the science of AI and the evaluations and benchmarks that should be looked to for AI governance. So I think the question is going to be how to tie a lot of these things together as they deepen, as the science deepens, as the industry standards deepen, as the global frameworks deepen, how do we connect these pieces to make sure that they talk to each other? And then finally, the point I wanna make about one of our priorities that we have in terms of the AI governance conversation is how do we reflect in our governance frameworks the realities of the AI value chain, and particularly open source AI? So what do I mean by that? Well, what we need to do is reflect the different roles that the actors in the AI value chain and ecosystem have to play, and those are unique and different roles. So what does the model developer have within its responsibility in terms of safety mitigation, risk mitigation, then what role does the deployer of the model play? And then the downstream developers, all of these players have unique roles and responsibilities. And I think as we look at a comprehensive kind of governance framework for the ecosystem, we need to take that into account. So speaking from the open source perspective, we don't have the control and visibility into the downstream uses of the model that a closed model provider might, simply because anyone can use our model for any purpose. In that case, then, what are the responsibilities of the developers who are developing the applications for very specific use cases? And so I think we need to bring that complexity to the conversation to make sure that we're using the right tools in the toolbox to address the harms that might arise. Thank you.
Yoichi Iida: Thank you very much, Melinda, for covering a lot of things. Actually, you know, in divergence or maybe in parallel, things are going on, such as the international discussions on governance frameworks and risk assessment. And this, I thought, will be the second topic in our discussion. But before that, I would like to invite the other speakers to share your overviews on the general fundamental understanding of the current situation. The previous two speakers covered a lot of elements like risks and challenges and also the opportunities, and diversity and inclusiveness will be also a very important part. And now I'd like to invite Thelma for some African views: when we talk about AI governance, what do we need to prioritize, and how do you regard the current situation?
Thelma Quaye: Thank you very much. Good evening, everybody. So I'd like to clarify, Smart Africa is not a multinational organization; rather, we are a pan-African organization working across Africa, supporting governments together with the private sector in digital transformation. Now, from my perspective of AI in the African context, I would liken AI to water. Water sort of nourishes us, it helps us, it helps us to grow our crops. In the same way, AI also helps us to be more efficient. It helps us to digest a lot of information and helps us to leapfrog. I'd like to emphasize the leapfrogging for Africa. You would know, for instance, that a number of African countries are behind in terms of education, in terms of health, in terms of transport, for instance. But if I take countries like Rwanda, Rwanda is, with the help of AI and drones, delivering supplies, for instance, to rural Rwanda. These are locations that are very hard to reach by car. And we are seeing an improvement in mortality rates, for instance, and this is based on AI. We are using AI in precision agriculture, for instance, in Rwanda, where we have new use cases. And these are things that we would not have been able to do leveraging our current infrastructure. So for us, indeed, AI is a way to leapfrog. But just like the duality of water, there is another side: if we don't govern it, then what happens? If we don't know what is going to happen, it is going to be disastrous. That said, we have had the African Union come up with the Continental AI Strategy. We've also got the blueprint, and some countries, like Ghana, have national strategies. So there are efforts towards that. The question still remains whether the approach we are taking is going to be used. Are we going to be using it? Is it a multi-stakeholder approach? Is the strategy aligned with what the African Union has developed for us? It's very important that we come together around an approach that, first of all, is harmonized, but also is fair and ethical, as I think my other colleagues have spoken about, and inclusive. For us, if AI is going to help us to leapfrog, we need to be able to make it inclusive as well. In terms of the challenges, within the African perspective, we know that the bedrock of AI is data, right? You need a lot of data to be able to properly utilize AI. But the number of data centers in Africa equals the number of data centers in Ireland. Look at the population of Africa, 1.4 billion, compared to the population of Ireland. So we need to increase the infrastructure. We could say that we will leverage other people's infrastructure, which is what we are doing now, but I believe that it takes some sort of sovereignty from us. And if we are talking about ethical AI, fair AI, using AI within your context, it's important that data is also within our jurisdiction so we are able to properly leverage it and make sure that it's sovereign. The second for us is the skills gap. I think five years ago, there was a craze about creating a lot of coders, but now AI can also code. So what do we do now? I believe, for instance, that what we need in Africa is to use AI to increase and scale TVET skills.
So for instance, we are talking about electric cars. We are talking about assembling phones, for instance, in Africa. Why can’t we use virtual reality generative AI to create classrooms for people to learn these skills, for instance, because we are not able to scale it up, but we can leverage AI to scale it up. But also, even within the space of software development, we still need people who can develop this AI and the robots and the rest. And the last one is also the data sets. And I think it speaks to the biases that my colleagues have spoken about. When most of the AI tools we have, have data sets that are from other regions, we do not have the data sets in Africa. So that’s one key thing that we also need to focus and build a data set so that the AI is African. It’s to our story, like we did with M-PESA. We can leverage on our own story to tell our story, because AI is as intelligent as the data you feed it. So if you feed it with other cultures, other data sets, it will always go against us. And we’ve seen some of them where we’ve applied AI tools from let’s say banking, AI banking tools from other regions that are rejecting people applying for loans just because they are of a certain race or of a certain gender. So it’s important for Africa within that perspective. I know it sounds very nascent, but it is what it is. We need to be able to create those data sets. Thank you.
Yoichi Iida: Okay, thank you very much. Very deep thoughts about the current situation of Africa and also the challenges of Africa. I think many of those are shared by other people from around the world. Your talk also reminded me of the speech by Minister Al-Swaha in the opening session, and also other speakers. Actually, it was very impressive to hear many speakers talk about AI in the opening of the Internet Governance Forum. So everything is emerging together and everything is related and reflected in each other. But we need to relate them all, one after another, to make the best benefit from technologies. And we need to future-proof. So from that perspective, I would like to ask for the viewpoint of the youth generation from Leydon.
Leydon Shantseko: Thank you, Iida. I'm sure you can hear me, right? So, maybe in addition, because some of the things that the previous speakers have raised are valid and great concerns. Of course, there's a dual aspect of it, the good and the bad. Talking about the youth perspective, I think a number of youths, give or take, have contributed largely to the development of artificial intelligence, be it from the innovative stage and also how some of the systems are working. And the duality of it is that others have used it to, for lack of a better term, cheat their way through, to leapfrog, while others are using it holistically, based on honesty and truth, as well as transparency and accountability. Now, when you look at it, maybe touching a bit back on Africa, the majority of the youths that I think have used AI have, among other things, been a source of data. We talk about data mining, for instance; most of the AI tools have been trained by a number of youths. Let me give a recent example: I think some Kenyan youths were protesting because the amount they were getting to feed data systems was way lower compared to the amount of money that these data systems are going to generate from it. So it's an issue of the balance: you have the global North developing most of the AI systems, but the global South is being used to train them, and less investment is coming into the global South. Thelma mentioned and raised an issue of data centers versus the number of the population, in terms of how many people are actually using local data centers for these advanced AI systems. A good example, and I think one of the arguments that we've had, is that most of the platforms and systems and technologies being used by the majority of Africans are not developed in Africa, which means data localization doesn't really exist in context. It can exist on paper, but in reality, data localization is not working. So in as much as we can talk about data governance and data regulation, we're building data acts and data protection based on data that we actually don't host, which I think creates a disparity, because you have a data act supposed to govern databases and localized data, but most of the data that we're actually working and operating on is not being hosted on local platforms. So you have a bigger gap in terms of governance of data in Africa based on the data that we actually are not hosting in the first place. So it comes back to who then hosts most of the data, and how do we create local acts or laws to govern the data that we're using when it's actually not being hosted in Africa. So when you talk about what should then be the priorities and the expectations from the youth perspective, it is: how much influence do we have on the data that we are producing at a local level when it's being hosted outside of Africa? Because, give or take, I can use this as an example: the majority of governments and civil society organizations that are based in Africa are not using local tools. For example, I think Microsoft is one of the big techs that most African countries and governments and civil society are using as a platform. Microsoft 365 is a good case in point, right? We use it for most of our data and it's not stored in Africa at all. Most of the data centers are not based in Africa, yet the data that is being received from Africa is something that we expect to input into AI, and we're talking about governance.
So I think when it comes to youth perspective, looking forward is how then do we create a balance between globalized data versus governing data from the local perspective and how much benefit then does African countries have on the data they are giving AI systems which are not being hosted and leveraged from the African perspective. I think that’s pretty much from the youth perspective aside from just adoption of AI by majority of the youth in Africa. I’ll end here for now.
Yoichi Iida: Okay, thank you very much. Very interesting perspective, and it covers a lot of things. But now it's very interesting to see: we gather for the internet, and we talk about AI, and now we are talking about data. So everything is related, of course, to each other, and probably we also need to talk about infrastructure such as data centers or computing power. And also, Melinda talked about the supply chain, and the business is trying to reflect the governance discussion onto the reality of the supply chain. But it's very interesting because, from the government perspective, we are trying to reflect the reality of the supply chain in the discussion of policy making. So it can be a kind of very healthy mutual interaction, but if it fails, the future will not be very productive. It's very interesting. And as governments, we have been making a lot of efforts in developing governance frameworks, such as the G7 Hiroshima Code of Conduct that Melinda talked about, on which we spent a lot of time with Ambassador Verdier and other colleagues from G7 countries. And also, over the last few years, we had the AI convention of the Council of Europe, we had the EU AI Act in place, and we had the Global Partnership on AI integrated with the OECD AI community. A lot of things happened over the last one or two years, and everything was connected, actually, to the OECD, and my partner Audrey Plonk has been looking after probably everything, and she knows everything, I believe. And I would like to ask for her comment on the overview of this current situation and also the points that previous speakers touched upon. So Audrey, please.
Audrey Plonk: Thank you, Yoichi, for the kind introduction, and hi to everyone. Sorry not to be with you in Riyadh today, but thank you so much for having me and the OECD on the panel. I'll be brief so that we can have more discussion. But just to say, we've had, you know, of course, in the last couple of years…
Yoichi Iida: I'm sorry, we cannot hear you. Just a moment. Can you try again? Does this work? No? No, I'm sorry. Okay, so the technical people are working on this, so please wait, and I will speak to you once again later. And now, so, we talked about a lot of things, and we have touched upon data and infrastructure, and, you know, we talked about the challenges and risks, of course, as well as opportunities. And also, we heard the views from different communities. So now, we do not have enough time, but I would like to invite all the speakers to make a comment on your perspective
about the responsibilities or roles, or what you are planning to do as your own community, such as business, government, or industry. Okay, so I would like to invite the last comments from all speakers, but before that, I would like to invite Audrey to try again.
Audrey Plonk: Does it work now? Okay, now I can hear you. Oh, wonderful. Thank you. I think maybe I was in the observer room and not the speaker room. Anyway, thank you for the kind introduction, Yoichi, and hi, everyone. Sorry to not be with you in person, but I’m sure you’re having a great IGF in Riyadh. Just on behalf of the OECD, we’ve obviously seen a lot of changes to the global internet governance space in the past five years, since we initially adopted our first AI recommendation, which we just revised earlier this year. And of course, as Yoichi mentioned, we’ve had the emergence of safety institutes, we’ve had changes just recently here at the OECD with the integration of the global partnership on AI into our work program, and the emergence of a lot of different policy topics, many of which other speakers talked about. Just to give a couple of examples, we know that issues of data and AI are super important. They’re critical. Everybody has said that in their own way. And our expert community here at the OECD has more than 400 people. And one of the more recent things we did in the last six months is create a group focused on privacy and data and AI. So just to say that I think the topics of the table and the issues that you’re coping with in different regions around the world, we see very much across the community that we work in, which is a broad global community. I would just say that if you’re not familiar with the OECD’s AI observatory at oecd.ai, it’s really the place where we’re trying to put as much data and evidence behind trends that are happening in AI, trends like what kind of language modelers are being built on what languages, so that policymakers can look at that data and start to shape a policy environment that implements the broad principles that I think we all agree on, things that have been mentioned before already, like around quality and fairness, around bridging divides and other things. So if you haven’t checked out the observatory, it’s a great place to look at things like where research cooperation is happening across countries, where patents are being filed, where investment is going into AI. And I think as we build that out, we have right now 70 different jurisdictions participating in the observatory and invite many more to come to come join us. I have a colleague in the room that you can talk to since I’m not there. But a big part of what we’re trying to do at the OECD is to provide as much interoperability and harmonization across different approaches, whether those be technical standardization approaches or policy approaches. That’s, I think, many people said the importance of us operating on a global space and in a global way. And we’re trying to bring our analytical and data-driven approach to AI. And I think, you know, just as we move forward into 2025, 2024 has been a very jam-packed year of AI and a lot of focus on important issues of safety, frontier models. But I also note the importance that others have said about maybe not frontier models, just the day-to-day integration of AI. And I’ll just close by saying some of the data we released earlier this year shows just how much runway there is for AI to diffuse or to be adopted across industries for a lot of potential benefit. And so, I think, you know, at best we see about an 8% diffusion rate of AI technologies and mostly in large companies. 
So, to make AI both more accessible and more widely adopted, we think, and we know it has to be trustworthy, but also that we have to put some of these other framework conditions in place around safety, security, and fairness to make those numbers around diffusion go up. So, I look forward to the rest of the discussion. And thank you, Yoichi.
Yoichi Iida: Thank you very much, Audrey, for the comment. And I think the remaining time is very much limited, but I would like to hear one more voice from all speakers. And we always hear, you know, about inclusivity when we talk about AI governance. And we had the GDC, and a lot of people are talking about AI gaps, the AI divide. So, now, I think the OECD is a kind of small group with 38 member countries, but now, after integrating with the GPAI, they have more than 40 members, and they are welcoming more. So, there will be a more inclusive group for AI discussion. But I think France has a similar perspective on AI governance, and you are organizing the AI Action Summit next year. So, I would like to ask the Ambassador: what will be the objective and goals of the AI Action Summit? You have just two minutes.
Henri Verdier: Hello, hello. Yes. So, in two minutes. First, maybe there is something we didn't say enough
in this room, and maybe not within the IGF itself. Of course, AI is not just a very promising technology. That's also a source of power and an intense competition between companies; they want to take the lead. That's also a geopolitical competition between models, and that's a competition between international organizations to take the lead. We have to face this. If we don't recognize this, we won't do a good job. And for us, for France, for a lot of people, one of the big threats regarding the future of AI and the future of AI governance would be a fragmentation of the global governance. Because if we let a fragmentation happen, we will engage in a race to the bottom, a race to the worst. Everyone, to remain strong within the competition, will propose the weakest regulation or governance. So, we have to stick together, we have to exchange. So, yes, of course, we think that the political framework of the OECD is of the utmost importance. We discuss, we integrate, we agree, and we are doing a great job within the GPAI and with the OECD. But we think that we need also a universal conversation. So, the Paris Summit, February 10 and 11, so soon now, in two months, will be probably the biggest international summit ever so far. So, we did invite 110 heads of state and governments, and we do expect something like 80 heads of state and governments, and most of the heads of international organizations. And we will propose an agenda around what is a good but broad governance, regarding all the holistic approach I did mention earlier. And that will be also a very intense multi-stakeholder conversation. So we do expect something like 1,000 or 2,000 delegates from research, from the industrial sector, from the private sector, from civil society. And we try to put the conversation around three main topics, or maybe four, because risk, security, safety institutes, and even the question about catastrophic risk or existential risk still matter. But we will add three layers. A conversation regarding sustainable AI, because let's try not to break the planet again with this new, promising technology. A conversation about, yes, broad governance, addressing all the concerns I did mention earlier. And a conversation about the needs of, what we don't have so far, big builds and digital public infrastructure, because frankly, we don't want these movements to be completely privatized. We need also public resources. Just to conclude with this, you know, I travel a lot. I meet a lot of researchers everywhere, including at the biggest American universities. Today, let's face it, public research cannot reproduce the results of the biggest companies. So that's not your fault, you are not responsible. But we want, we need a world where there is common knowledge and where public research can reproduce at least, or maybe preempt, but at least reproduce the results of the private sector. And for this problem, we don't have the finance; we need more money in the public sector. Okay. Thank you very much.
Yoichi Iida: So we have five minutes left, but I would like to hear one voice, it’s from one of the other speakers. Melinda, what is your expectation and what do you think you can do in developing AI governance and also AI ecosystem?
Melinda Claybaugh: Thank you. So just a couple of things I want to touch on. I think companies have significant responsibility here, clearly. I think particularly around participating in the international frameworks and initiatives that are being developed and adhering to them, and working with safety institutes in their home countries to develop the research and evaluations of advanced models. I think being transparent with everyone about how their large models are developed, what they're capable of, what the risks are, how they're addressing risks, all of those things I think are really important and squarely on the shoulders of developers. And then I think there's a lot of partnerships that need to be developed in terms of the public and private sectors, whether it's on research capabilities, or whether it's working to develop data that's going to be representative of the entire world. For instance, we are working with the Gates Foundation, which is not the government, to develop African data for training. So how can we partner together to advance some of the common goals? I think that is going to be really important going forward. And I think we're starting to get a sense of what the real needs are and the real opportunities. And then no one can do this alone, right? So everyone has their interests to further, but we need to be doing it together. So I think really getting a clear understanding of where we can partner together to advance the governance, I think, is the next phase.
Yoichi Iida: Thank you very much. That will be very important. And I'm going to go back to you, Thelma, and then I'll turn it over to you.
Thelma Quaye: Thank you. So I'll speak from the governance perspective. One of the key things, or the key differences we've seen, is that there's a disparity between governance, data, and enforcement. So within the African perspective, we need to come up with ways of enforcing so that it's effective. There's no point in spending so much time developing policies and governance without enforcing them. And on that, we are looking at things like giving tax breaks to companies who are compliant, setting up authorities who are autonomous, those kinds of things. So enforcement is a key thing to consider. But also multi-stakeholder: I believe that AI brings the world together even more than the internet. And so there's a need to have a universal approach to AI governance. And we fully support what was said in terms of bringing everybody in, having a universal approach. And that's what we do also at Smart Africa, to bring the private sector and civil society organizations to the government. But it's only Africa. And so we need also to go beyond Africa to put in place a sort of universal model. Thank you.
Yoichi Iida: Thank you very much. Actually, you know, enforcement is very important. And actually, this is what we are working very hard on with Audrey. Before I invite Audrey to comment on this, I invite Leydon to share what his expectations are. What do you think you can do?
Leydon Shantseko: The first one is not to be just used in most of the conversations, especially when it comes to governance. I'm not very confident, in as much as I have an idea of how other continents or other regions have done with regard to youth involvement. But with regard to Africa, partly there is a feeling among youths that governance most times comes in to try and reduce or regulate innovative ideas before they actually thrive. So what the youths call for is for government to engage youths with an open mind, so that they allow for innovation to thrive, and then the safeguards, in terms of what the risks are, can be talked about with the youths in the meetings or in the room, rather than talking about reducing certain access, because at the end of the day, it looks more like we are trying to come up against innovation by bringing regulation, and there is also the tendency to regulate something before fully understanding how big it is. That is, I think, one of the challenges. Secondly, it's allowing for youths to continue innovating, even when we are talking about emerging technology, in the context that youths, I think, have been at the heart of most of the innovative ideas. So when we talk about AI and emerging technologies, we don't have to think of the youths in terms of not knowing something, but allow them the benefit of the doubt to take the risk and allow them to thrive and grow. Because most of the technical ideas, or some of the infrastructure or developments we have as foundations, most of them were actually developed by the youths. So involving them, I think, from the start all the way to the end of the process is something that I would recommend. Or maybe also, let me say this in front of the ambassador: considering the number of heads of state who are being invited, the question is how are they balanced against the youths? Because if it's governments mostly making these policies, there's a high chance that youths' perspectives are mostly left out of the room, and then youths are involved at a later stage, when they are behind most of the innovation. So I think trying to create a balance between the government perspective versus the youths' perspective in most of the governance processes becomes very critical. Thank you.
Yoichi Iida: Thank you. Sounds good. So thank you very much for the comment. And now we have heard about enforcement and innovation. And I would like to invite Audrey to talk about the conclusion on my behalf.
Audrey Plonk: Thanks, Yoichi. I just want to say that governance is a lot more than regulation. Regulation is really important, but governance can include other tools. And I just want to not miss the opportunity to talk about one that we hope to be finalizing very soon this week, which is an implementation framework, or a reporting framework, to implement the Hiroshima AI Code of Conduct. And the purpose of this framework is to allow companies and institutions and organizations to report publicly on their activities related to the code of conduct, so that we can take voluntary tools and move them from just sort of nice words on a paper to building out an ecosystem of information that can inform policy decisions. Because I think the prior speaker said rightly that, you know, we operate often in a vacuum. Those of us who work on AI day in and day out, we may know a lot, but there's a lot of stuff we don't know about AI. And it's very hard to implement good governance or regulation in that vacuum. And so I think filling that vacuum and the void is an important step. And as part of that process to develop this reporting framework for the Hiroshima Code of Conduct implementation, we've also mapped this code of conduct to many other codes of conduct. And I think there's a way, for the organizations that we're going to be asking and hoping will adopt and start to engage in this governance dialogue, to go beyond just talking, which is also really important, towards information sharing in a concrete way that can help inform researchers and the public, and to make these tools as interoperable as possible. And that's not a negative thing for the world; it's instead advancing a system whereby we can compare, as we say, in an apples-to-apples way, we can compare like things together to see what's happening at a global scale. So I hope to be able to announce good news there. I know you hope so too, Yoichi, as we really work to implement some of these extremely important activities in the governance space that have taken place over the last couple of years and try to move them out of the negotiating rooms and into the practical implementation phase. So thank you so much.
Yoichi Iida: Okay. So, if the Hiroshima Code of Conduct is put in place and in action together with a monitoring mechanism, that would be a very experimental mechanism where the private sector and the government will work together to ensure the safety, security, and trustworthiness of AI systems through voluntary work. So we know that is not the only answer, but we are making a lot of efforts to build up a governance framework which should be inclusive and trustworthy. And we, of course, understand that this effort should be done in collaboration among the different stakeholders, such as not only the government, but industry, civil society, academia, youth, and others. So I hope the discussion was very much productive and helpful to the audience, and I hope we continue working together towards an open, free, not fragmented, and trustworthy AI ecosystem globally. So thank you very much to all the speakers, and also thank you very much to the audience. Very much sorry about the audio system, but I hope you enjoyed the discussion. So thank you very much. Thank you.
Henri Verdier
Speech speed: 128 words per minute
Speech length: 1085 words
Speech time: 505 seconds
AI as a source of progress and power requires balanced governance
Explanation
AI is not just a promising technology but also a source of power and intense competition between companies and countries. There is a need for balanced governance to ensure AI benefits humanity while addressing potential risks.
Evidence
The speaker mentions the upcoming Paris Summit in February, which will be a large international summit with 80+ heads of state to discuss AI governance.
Major Discussion Point: Current State and Challenges of AI Governance
Need for holistic, international approach to address diverse challenges
Explanation
A holistic approach to AI governance is necessary to address various challenges including security, cultural diversity, environmental impact, and intellectual property. The speaker emphasizes the importance of a universal conversation on AI governance.
Evidence
The Paris Summit will propose an agenda around broad governance, addressing topics like sustainable AI, digital public infrastructure, and the needs of emerging economies.
Major Discussion Point: Priorities for AI Governance Frameworks
Agreed with: Melinda Claybaugh, Thelma Quaye, Yoichi Iida
Agreed on: Need for inclusive and international approach to AI governance
Differed with: Melinda Claybaugh
Differed on: Approach to AI governance
Governments responsible for balancing innovation and security
Explanation
Governments have a responsibility to balance innovation and security, economic growth and equality, and efficiency and diversity in AI development. There is a need to ensure that AI progress benefits humankind as a whole.
Major Discussion Point: Roles and Responsibilities in AI Development
Agreed with: Melinda Claybaugh, Leydon Shantseko
Agreed on: Importance of balancing innovation and regulation
Melinda Claybaugh
Speech speed: 145 words per minute
Speech length: 461 words
Speech time: 189 seconds
Open source AI models promote innovation and economic development
Explanation
Open source AI models provide flexibility for developers to customize applications to local needs. This approach is seen as beneficial for economic development by creating a diverse ecosystem of AI tools.
Evidence
The speaker mentions their company's large language model, Llama, which is made available for free download.
Major Discussion Point: Current State and Challenges of AI Governance
Agreed with: Henri Verdier, Leydon Shantseko
Agreed on: Importance of balancing innovation and regulation
Governance should reflect realities of AI value chain and ecosystem
Explanation
AI governance frameworks need to reflect the different roles and responsibilities of actors in the AI value chain. This includes model developers, deployers, and downstream developers, each with unique responsibilities in risk mitigation.
Evidence
The speaker discusses the differences between open source and closed model providers in terms of control and visibility into downstream uses.
Major Discussion Point: Priorities for AI Governance Frameworks
Agreed with: Henri Verdier, Thelma Quaye, Yoichi Iida
Agreed on: Need for inclusive and international approach to AI governance
Differed with: Henri Verdier
Differed on: Approach to AI governance
Companies should be transparent about AI development and risks
Explanation
Companies have significant responsibility in AI governance. They should participate in international frameworks, work with safety institutes, and be transparent about their large models’ development, capabilities, and risks.
Evidence
The speaker mentions their company’s support for various AI global frameworks and commitment to publish an AI safety framework.
Major Discussion Point: Roles and Responsibilities in AI Development
Thelma Quaye
Speech speed: 150 words per minute
Speech length: 924 words
Speech time: 368 seconds
Africa lacks necessary infrastructure and skills to fully leverage AI
Explanation
Africa faces challenges in leveraging AI due to a lack of data infrastructure and skills. The continent needs to increase its data center capacity and develop AI-related skills to fully benefit from the technology.
Evidence
The speaker mentions that the number of data centers in Africa equals the number in Ireland, despite the vast population difference.
Major Discussion Point: Current State and Challenges of AI Governance
Importance of building local African datasets and AI capabilities
Explanation
There is a need to create African datasets and develop local AI capabilities to ensure AI is relevant and beneficial to the African context. This is crucial for addressing biases and ensuring AI tools are appropriate for African needs.
Evidence
The speaker gives an example of AI banking tools from other regions rejecting loan applications based on race or gender.
Major Discussion Point: Priorities for AI Governance Frameworks
Differed with: Leydon Shantseko
Differed on: Focus of AI development in Africa
Africa needs to increase data infrastructure and sovereignty
Explanation
Africa needs to increase its data infrastructure and ensure data sovereignty. This is crucial for leveraging AI effectively and ensuring that AI governance is relevant to the African context.
Evidence
The speaker mentions the need for effective enforcement of AI governance policies in Africa, suggesting measures like tax breaks for compliant companies.
Major Discussion Point: Roles and Responsibilities in AI Development
Agreed with: Henri Verdier, Melinda Claybaugh, Yoichi Iida
Agreed on: Need for inclusive and international approach to AI governance
Leydon Shantseko
Speech speed: 172 words per minute
Speech length: 405 words
Speech time: 141 seconds
Youth perspective highlights data sovereignty and localization issues
Explanation
The youth perspective emphasizes issues of data sovereignty and localization in Africa. There is a concern about the lack of local data centers and the implications for data governance when most data is hosted outside Africa.
Evidence
The speaker mentions that most African governments and organizations use platforms like Microsoft 365, with data stored outside Africa.
Major Discussion Point
Current State and Challenges of AI Governance
Youth involvement needed throughout AI governance process
Explanation
There is a need for greater youth involvement in AI governance processes. The speaker argues that youths should be engaged with an open mind, allowing for innovation to thrive while addressing potential risks.
Evidence
The speaker mentions the perception among youth that governance often comes in to regulate innovative ideas before they can thrive.
Major Discussion Point
Priorities for AI Governance Frameworks
Youth should be allowed to innovate while addressing risks
Explanation
The speaker advocates for allowing youth to continue innovating in emerging technologies like AI. There is a call to give young people the benefit of the doubt, letting them take risks and grow, while still addressing potential harms.
Evidence
The speaker points out that youths have been at the heart of most innovative ideas and have developed many of the foundational technologies we use today.
Major Discussion Point
Roles and Responsibilities in AI Development
Agreed with
Henri Verdier
Speaker 1
Agreed on
Importance of balancing innovation and regulation
Differed with
Thelma Quaye
Differed on
Focus of AI development in Africa
Audrey Plonk
Speech speed
139 words per minute
Speech length
1433 words
Speech time
616 seconds
OECD working to provide data and harmonization across AI approaches
Explanation
The OECD is working to provide data and evidence on AI trends through its AI observatory. This effort aims to help policymakers shape policy environments that implement broad principles like equality, fairness, and bridging divides.
Evidence
The speaker mentions the OECD AI observatory at oecd.ai, which provides data on trends like language model development, research cooperation, and investment in AI.
Major Discussion Point
Current State and Challenges of AI Governance
Developing interoperable governance tools and reporting frameworks
Explanation
The OECD is working on developing interoperable governance tools and reporting frameworks. This includes an implementation framework for the Hiroshima AI Code of Conduct, aimed at allowing organizations to report on their activities related to the code.
Evidence
The speaker mentions the upcoming finalization of a reporting framework to implement the Hiroshima AI Code of Conduct.
Major Discussion Point
Priorities for AI Governance Frameworks
Yoichi Iida
Speech speed
121 words per minute
Speech length
2134 words
Speech time
1055 seconds
Multi-stakeholder cooperation needed to advance governance
Explanation
The speaker emphasizes the need for collaboration among different stakeholders to build an inclusive and trustworthy AI governance framework. This includes cooperation among government, industry, civil society, academia, youth, and others.
Major Discussion Point
Roles and Responsibilities in AI Development
Agreed with
Henri Verdier
Speaker 1
Thelma Quaye
Agreed on
Need for inclusive and international approach to AI governance
Agreements
Agreement Points
Need for inclusive and international approach to AI governance
Henri Verdier
Speaker 1
Thelma Quaye
Yoichi Iida
Need for holistic, international approach to address diverse challenges
Governance should reflect realities of AI value chain and ecosystem
Africa needs to increase data infrastructure and sovereignty
Multi-stakeholder cooperation needed to advance governance
Speakers agree on the importance of a comprehensive, global approach to AI governance that involves multiple stakeholders and addresses various challenges across different regions.
Importance of balancing innovation and regulation
Henri Verdier
Speaker 1
Leydon Shantseko
Governments responsible for balancing innovation and security
Open source AI models promote innovation and economic development
Youth should be allowed to innovate while addressing risks
Speakers emphasize the need to foster innovation in AI while also addressing potential risks and security concerns through appropriate governance measures.
Similar Viewpoints
Both speakers highlight the challenges faced by Africa in terms of AI infrastructure, data sovereignty, and the need for local development of AI capabilities.
Thelma Quaye
Leydon Shantseko
Africa lacks necessary infrastructure and skills to fully leverage AI
Youth perspective highlights data sovereignty and localization issues
Both speakers emphasize the importance of transparency and responsibility in AI development, whether from companies or governments.
Henri Verdier
Speaker 1
Companies should be transparent about AI development and risks
Governments responsible for balancing innovation and security
Unexpected Consensus
Importance of youth involvement in AI governance
Leydon Shantseko
Yoichi Iida
Youth involvement needed throughout AI governance process
Multi-stakeholder cooperation needed to advance governance
While youth involvement might not typically be a primary focus in AI governance discussions, both speakers emphasized its importance, suggesting a growing recognition of the need for diverse perspectives in shaping AI policies.
Overall Assessment
Summary
The speakers generally agree on the need for a comprehensive, inclusive approach to AI governance that balances innovation with risk mitigation. There is consensus on the importance of international cooperation, addressing regional challenges, and involving diverse stakeholders, including youth.
Consensus level
Moderate to high consensus on broad principles, with some variation in specific focus areas. This level of agreement suggests potential for collaborative efforts in developing global AI governance frameworks, but also highlights the need for tailored approaches to address region-specific challenges, particularly in developing areas like Africa.
Differences
Different Viewpoints
Approach to AI governance
Henri Verdier
Speaker 1
Need for holistic, international approach to address diverse challenges
Governance should reflect realities of AI value chain and ecosystem
While Henri Verdier emphasizes a holistic, international approach to AI governance, Speaker 1 focuses on reflecting the realities of the AI value chain and ecosystem in governance frameworks.
Focus of AI development in Africa
Thelma Quaye
Leydon Shantseko
Importance of building local African datasets and AI capabilities
Youth should be allowed to innovate while addressing risks
Thelma Quaye emphasizes the need for building local African datasets and AI capabilities, while Leydon Shantseko advocates for allowing youth to innovate freely in AI development.
Unexpected Differences
Data localization and sovereignty in Africa
Thelma Quaye
Leydon Shantseko
Africa needs to increase data infrastructure and sovereignty
Youth perspective highlights data sovereignty and localization issues
While both speakers address data sovereignty in Africa, Leydon Shantseko unexpectedly highlights the youth perspective on this issue, emphasizing the practical challenges of data localization when most platforms used are not hosted in Africa.
Overall Assessment
Summary
The main areas of disagreement revolve around the approach to AI governance, the focus of AI development in Africa, and the practical implementation of data sovereignty.
Difference level
The level of disagreement among speakers is moderate. While there is general agreement on the importance of AI governance and development, speakers differ in their specific approaches and priorities. These differences reflect the complexity of AI governance and the need for tailored approaches to different contexts, particularly in developing regions like Africa. The implications suggest that a one-size-fits-all approach to AI governance may not be effective, and that balancing global standards with local needs and perspectives will be crucial.
Partial Agreements
All speakers agree on the need for comprehensive AI governance, but they differ in their approaches. Henri Verdier emphasizes a holistic international approach, Speaker 1 focuses on reflecting the AI value chain realities, and Audrey Plonk highlights the OECD’s role in providing data and harmonization.
Henri Verdier
Speaker 1
Audrey Plonk
Need for holistic, international approach to address diverse challenges
Governance should reflect realities of AI value chain and ecosystem
OECD working to provide data and harmonization across AI approaches
Takeaways
Key Takeaways
AI governance requires a balanced, holistic international approach to address diverse challenges and opportunities
There is a need for inclusive, multi-stakeholder cooperation in developing AI governance frameworks
Infrastructure, skills, and data sovereignty gaps exist, particularly in Africa and developing regions
Open source and transparent AI development can promote innovation and economic development
Youth involvement throughout the AI governance process is crucial
Resolutions and Action Items
OECD working on finalizing a reporting framework to implement the Hiroshima AI Code of Conduct
France organizing an international AI summit in February 2025 to discuss global AI governance
Efforts to make AI governance tools and frameworks more interoperable across different initiatives
Unresolved Issues
How to effectively balance innovation with regulation and risk mitigation
Addressing data localization and sovereignty concerns, particularly for developing regions
Ensuring fair representation and inclusion of diverse perspectives in AI governance discussions
How to enforce AI governance policies effectively, especially in regions like Africa
Suggested Compromises
Using voluntary reporting frameworks and codes of conduct as a middle ground between strict regulation and no oversight
Partnering between public and private sectors to develop representative data and research capabilities
Allowing for innovation while simultaneously developing safeguards and addressing potential risks
Thought Provoking Comments
Are we sure that this impressive revolution will be a progress? Not just innovation, not just power, but a progress for humankind.
speaker
Henri Verdier
reason
This comment frames AI development not just as technological advancement, but raises the crucial question of whether it will truly benefit humanity as a whole. It sets the tone for considering the broader implications and ethics of AI.
impact
This framing shifted the discussion towards considering the holistic impacts of AI on society, culture, and human progress, rather than just focusing on technical capabilities.
I’m likening AI to water. Water sort of nourishes us, it helps us to grow our crops. In the same way, AI also helps us to be more efficient. It helps us to digest a lot of information and helps us to leapfrog.
speaker
Thelma Quaye
reason
This analogy provides a powerful and accessible way to understand both the potential benefits and risks of AI, especially from an African perspective. It highlights AI’s transformative potential while also acknowledging the need for proper governance.
impact
This comment brought attention to the specific context and needs of developing regions, leading to a discussion on inclusivity, infrastructure challenges, and the potential for AI to address developmental gaps.
We need to increase the infrastructure. We could say that we will leverage on other people’s infrastructure, which is what we are doing now, but I believe that it takes some sort of sovereignty from us.
speaker
Thelma Quaye
reason
This insight highlights the critical issue of digital sovereignty and the importance of local infrastructure for AI development, especially in the context of developing nations.
impact
It sparked a deeper conversation about data localization, infrastructure gaps, and the geopolitical implications of AI development and deployment.
We have most of the platforms and systems and technologies being used by majority of Africans not developed in Africa, which means data localization doesn’t really exist in context. It can exist on paper, but in reality, data localization is not working.
speaker
Leydon Shantseko
reason
This comment exposes a critical gap between policy intentions and practical realities in data governance, particularly in the African context.
impact
It led to a discussion on the challenges of implementing effective data governance and AI policies in regions that lack control over their technological infrastructure.
Of course, AI is not just a very promising technology. That’s also a source of power and an intense competition between companies, they want to take the lead. That’s also a geopolitical competition between models and that’s a competition between international organizations to take the lead.
speaker
Henri Verdier
reason
This comment brings attention to the often overlooked geopolitical and competitive aspects of AI development, framing it as not just a technological issue but a matter of global power dynamics.
impact
It broadened the discussion to include considerations of international relations, economic competition, and the potential for fragmentation in global AI governance.
Overall Assessment
These key comments shaped the discussion by broadening its scope from purely technical considerations to encompass ethical, geopolitical, and developmental aspects of AI. They highlighted the need for inclusive, globally coordinated governance that addresses the specific challenges of developing regions while also considering the competitive and power dynamics at play. The discussion evolved from abstract principles to concrete challenges in implementation, particularly around data sovereignty and infrastructure development. This multifaceted approach underscored the complexity of AI governance and the necessity for diverse perspectives in shaping global policies.
Follow-up Questions
How can we connect the various AI governance conversations and frameworks to ensure they work together coherently?
speaker
Melinda Claybaugh
explanation
There are many parallel conversations happening around AI safety, data privacy, copyright, and governance of different types of AI. Connecting these is important for developing comprehensive and effective AI governance.
How can AI governance frameworks reflect the realities of the AI value chain, particularly for open source AI?
speaker
Melinda Claybaugh
explanation
Different actors in the AI ecosystem have unique roles and responsibilities. Governance frameworks need to account for these differences, especially considering open source AI models.
How can we increase AI infrastructure, particularly data centers, in Africa?
speaker
Thelma Quaye
explanation
The lack of data centers in Africa compared to its population size limits the continent’s ability to leverage AI effectively and maintain data sovereignty.
How can we use AI to scale up technical and vocational education and training (TVET) skills in Africa?
speaker
Thelma Quaye
explanation
Using AI and virtual reality to create classrooms for learning technical skills could help address the skills gap in Africa.
How can we develop more African-specific AI datasets?
speaker
Thelma Quaye
explanation
Having AI trained on African datasets is crucial for ensuring AI tools are relevant and unbiased for African contexts.
How can African countries effectively govern data that is not hosted locally?
speaker
Leydon Shantseko
explanation
Many African governments and organizations use platforms hosted outside Africa, creating challenges for local data governance and regulation.
How can we ensure a fair balance between global data use and local data governance?
speaker
Leydon Shantseko
explanation
There’s a need to balance the benefits of global AI systems with local control and governance of data.
How can we increase AI adoption across industries, particularly in smaller companies?
speaker
Audrey Plonk
explanation
Current AI adoption rates are low, especially outside of large companies. Increasing adoption could lead to significant benefits but requires addressing issues of trust, safety, and accessibility.
How can we ensure public research can reproduce or preempt private sector AI developments?
speaker
Henri Verdier
explanation
There’s a growing gap between public and private AI research capabilities, which could limit common knowledge and public oversight of AI developments.
How can we develop partnerships between public and private sectors to advance common AI goals?
speaker
Melinda Claybaugh
explanation
Collaboration is needed in areas such as research capabilities and developing representative global datasets for AI training.
How can we improve the enforcement of AI governance policies, particularly in Africa?
speaker
Thelma Quaye
explanation
There’s a need for effective enforcement mechanisms to ensure AI governance policies have real impact.
How can we better involve youth in AI governance discussions and decision-making processes?
speaker
Leydon Shantseko
explanation
Youth perspectives are often left out of governance processes, despite young people being at the forefront of many AI innovations.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event

Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online