Main Session | Policy Network on Artificial Intelligence
Session at a Glance
Summary
This discussion focused on the outcomes of the IGF Policy Network on Artificial Intelligence’s year-long work, covering key areas of AI governance. The panel explored four main topics: liability in AI governance, environmental sustainability in the generative AI value chain, interoperability, and labor implications of AI. Speakers emphasized the need for a global governance framework for AI, highlighting challenges such as the digital divide, environmental impacts, and the potential for misuse in areas like peace and security.
Key points included the importance of accountability and transparency in AI systems, with suggestions for expanding liability to both producers and operators. The environmental impact of AI was discussed, particularly regarding resource extraction and energy consumption. Speakers stressed the need for interoperability while cautioning against potential exploitation of creators’ labor. The discussion touched on AI’s impact on the job market and the need for upskilling and reskilling workforces.
Panelists debated the feasibility of a global AI governance regime, acknowledging the challenges of multilateralism but emphasizing its necessity. The importance of common standards and definitions was highlighted, along with the need for both global cooperation and domestic policy implementation. Speakers also addressed the need for a clear definition of AI and the challenges of regulating a rapidly evolving technology.
The discussion concluded with calls for responsible collaboration, prioritizing sustainability in AI development, and the importance of considering AI’s impact on humanity as a whole. Participants emphasized the need for a comprehensive approach to AI governance that addresses technical, ethical, and societal implications.
Keypoints
Major discussion points:
– The need for global AI governance and cooperation, while balancing domestic policies
– Liability and accountability issues related to AI systems and their impacts
– Environmental and sustainability concerns around AI development and deployment
– Addressing inequality and capacity building to ensure equitable AI access and benefits
– Defining AI and its applications to enable effective governance
The overall purpose of the discussion was to examine key issues and recommendations around AI governance, based on a report by the IGF Policy Network on AI. The speakers aimed to explore how to implement responsible AI development and deployment on a global scale.
The tone of the discussion was largely serious and concerned, reflecting the gravity of the challenges posed by AI. However, there were also notes of cautious optimism about AI’s potential benefits if governed properly. The tone became more urgent towards the end as speakers emphasized the need for swift action on global AI governance.
Speakers
– Sorina Teleanu: Moderator
– Amrita Choudhury: Policy Network on AI coordinator
– Jimena Viveros: Managing Director and CEO of Equilibrium AI, member of the UN Secretary General’s high-level advisory body on AI
– Anita Gurumurthy: Executive Director of IT4Change
– Yves Iradukunda: Permanent Secretary, Ministry of ICT and Innovation of Rwanda
– Brando Benifei: Member of the European Parliament and co-rapporteur for the EU AI Act
– Meena Lysko: Founder and Director of Move Beyond Consulting and Co-Director of Merit Empower Her
– Muta Asguni: Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia
Additional speakers:
– Ansgar Kuhne: Affiliated with EUI
– Riyad Najm: From the media and communications sector
– Mohd: Online moderator
Full session report
Expanded Summary of IGF Policy Network on Artificial Intelligence Discussion
Introduction
This summary reports on a discussion focused on the outcomes of the IGF Policy Network on Artificial Intelligence’s year-long work, covering key areas of AI governance. The panel explored four main topics: liability in AI governance, environmental sustainability in the generative AI value chain, interoperability, and labour implications of AI. The discussion brought together experts from various sectors, including government officials, policymakers, and industry representatives, to address the complex challenges posed by AI development and deployment on a global scale.
Key Discussion Points
1. Liability and Accountability in AI Systems
The discussion emphasized the importance of establishing clear liability and accountability mechanisms for AI systems. Anita Gurumurthy, Executive Director of IT4Change, called for liability rules that encompass both producers and operators of AI systems. Jimena Viveros, Managing Director and CEO of Equilibrium AI and member of the UN Secretary General’s High-Level Advisory Body on AI, stressed the importance of state responsibility throughout the AI lifecycle, while also noting the challenge of allocating responsibility given the opacity of AI systems.
Brando Benifei, Member of the European Parliament and co-rapporteur for the EU AI Act, highlighted the need for transparency to address liability issues in the AI value chain. The speakers agreed on the importance of accountability and transparency in AI systems, emphasizing the need for explainability to ensure proper governance.
2. Environmental Sustainability in the AI Value Chain
The environmental impact of AI was a significant point of discussion. Meena Lysko, Founder and Director of Move Beyond Consulting, highlighted the environmental impacts of AI infrastructure and resource extraction. Muta Asguni, Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia, provided concrete data on the projected increase in electricity consumption by data centres, emphasizing the urgency of addressing AI’s environmental impact.
The discussion also touched on the potential of AI to support sustainable development goals, highlighting the complex nature of AI’s effects on society and the environment.
3. Interoperability and Global Cooperation
The importance of interoperability and global cooperation in AI development and governance was a recurring theme. Yves Iradukunda, Permanent Secretary at the Ministry of ICT and Innovation of Rwanda, stressed the importance of partnerships to bridge divides in AI development, particularly between developed and developing nations. Benifei emphasized the need for common standards and definitions for AI globally.
Lysko called for sincere collaboration on responsible AI development, while Asguni highlighted the challenge of regulatory arbitrage between countries. This underscored the need for a coordinated global approach to AI governance while recognizing the difficulties in achieving this given varying national interests and capabilities.
4. Labour Implications and Social Impact of AI
The social implications of AI, particularly its impact on labour markets, were discussed. Benifei stressed the need to consider AI’s impact on labour markets and the importance of upskilling and reskilling workforces. Iradukunda emphasized the importance of addressing inequalities in AI adoption and access, particularly in the Global South.
The discussion highlighted the need for a whole-of-government and whole-of-society approach to AI governance, as well as the importance of capacity building and awareness to ensure equitable AI development and deployment.
Additional Key Points
1. Global AI Governance and Regulation: The need for a global governance framework for AI emerged as a central theme, with speakers emphasizing the importance of international cooperation while acknowledging the challenges of multilateralism.
2. Defining AI: The challenge of defining AI for governance purposes was raised by audience member Riyad Najm, highlighting a fundamental issue in AI regulation efforts.
3. AI’s Impact on Peace and Security: Multiple speakers raised concerns about the potential misuse of AI in military applications and its implications for global peace and security.
4. AI Chatbot for Report Interaction: The creation of an AI chatbot to allow interaction with the report’s contents was noted as a practical tool for disseminating the findings.
Unresolved Issues and Future Directions
Several key issues remained unresolved, including:
1. How to achieve a binding global treaty or governance framework for AI
2. Balancing proactive and reactive approaches to AI regulation
3. Addressing regulatory arbitrage between countries, especially between the Global North and South
4. Defining AI in a way that allows for effective governance
5. Ensuring transparency and explainability of AI systems for accountability purposes
6. Protecting against misuse of AI, especially in military applications
Conclusion
The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreement on the need for global cooperation and comprehensive governance frameworks, differences in emphasis and approach underscored the difficulties in achieving a unified global strategy. The conversation emphasized the importance of considering AI’s impact on humanity as a whole, balancing its potential benefits with the need to mitigate risks and ensure equitable access and development.
As summed up by a quote from the UN Secretary General, shared by moderator Sorina Teleanu, “Digital technology must serve humanity, not the other way around.” This encapsulates the overarching goal of AI governance efforts discussed in this panel: to harness the potential of AI for sustainable development and societal benefit while addressing the significant challenges it poses to global governance, environmental sustainability, and social equity.
Session Transcript
Sorina Teleanu: Welcome to the main session of the IGF policy network on artificial intelligence. We have about one hour and 15 minutes to go through the main outcomes of work that has been happening for about a year. And before I introduce our guests, I would like to invite you to join me in welcoming Amrita, who will tell us a bit about the work that has been going on this year behind the policy network on artificial intelligence. Amrita?
Amrita Choudhury: Good afternoon, everyone, and thank you for coming. I agree with Sorina, the room is too large, the audience is too far away for all of us, but thank you for coming to this policy network on AI’s main session. Just to give you some background, the policy network on AI originated from the 2022 IGF, which was held in Addis Ababa, where the community thought that there should be a particular policy network which works on AI issues, especially related to governance, with a focus on the global south. And so the first year, that is last year, we produced a report, and you can go and see it on the PNAI website. This is a multi-stakeholder group which actually decides what is going to be discussed, how it is going to happen, and how the report is formed. We have a few of the community members also sitting here, and this year we had four subgroups: interoperability, sustainability, liability, and labour-related issues. Some of the community members have been very active; of course, all community members have worked in their capacities, and they are all volunteers, but some names which I would like to mention are Caroline, Shamira, Ashraf, who is also the online moderator, Yik Chang Chin, Olga, Shafiq, and Herman, and Olga Kavali; they were great leaders of the various subgroups. We also thank all the members, volunteers, proofreaders, and our consultant, Mikey, who is working behind the screen, and I think she’s sitting there, and our MAG coordinator, who is also sitting there, for all the hard work which has been put in. If you want to see the report, it is there online, and if you are in the Zoom room, it will be put into the chat, and I think Sorina also has something planned. With that, I will pass it on to Sorina and our panellists.
Sorina Teleanu: Thank you so much, Amrita. I’m going to try to get closer to you as well, because that feels a bit odd, and the light is exactly on me. So we heard a bit about the work of the Policy Network on Artificial Intelligence, and I’m sure you have heard lots of talks about AI these days. We are in fact reporting from the sessions, and you can probably also guess it: the main word over the past two days at the IGF has been, obviously, AI. So it’s obviously talked about quite a lot, and it is in this context that we will be trying to unpack some of the discussions around artificial intelligence, more specifically around AI governance, with our esteemed guests, whom I’m going to introduce briefly. But before that, let me tell you a few words about the report. Amrita mentioned it’s available online, and it is the result of a one-year-long process, so I do kindly encourage you to take a look at it, at least at the executive summary. The report covers four main areas. One is liability as a policy lever in AI governance. The second is environmental sustainability within the generative AI value chain. The third area is interoperability: legal, technical, and data-related. And the final area covered in the report is the labor implications of AI. So now I’m going to ask the obvious question. Has anyone here even tried to open the report before joining this session? Am I seeing a hand? Oh, I’m seeing a few hands. Excellent. Thank you for doing that. I also have some, I hope, good news for you here and also for our colleagues who have been working so long on this report. Our attention spans are kind of limited these days, and reading a 100-something-page report might not be the first thing we want to do. But I have a gift for you, and that’s an AI assistant. We talk about AI; let’s also walk the talk a bit. What my colleagues at Diplo Foundation have been doing is build an AI assistant based solely on the report. So you can go online and actually interact with the AI assistant and ask questions about the report and its recommendations. We’re going to share the link, and you can access it during and after the session as well. And I’m pretty sure colleagues who have been working on the report would be looking forward to hearing your feedback as well on what is written there. Let me turn back to our guests. I’ll introduce them briefly. And then the plan for the session is to hear a bit from them about the four main areas of the report, to hear how they see the recommendations of the report and where they think these recommendations could be going moving forward so they actually have an impact in the real world. And then we do hope to have a dialogue. Although this room might not be very inviting for the kind of dialogue we’re hoping for, I will be looking at you, and I hope there will be a few raised hands in the room. So let me do what I have been promising for quite a while. In no particular order, we have Jimena Viveros. Thank you, Jimena. Managing Director and CEO of Equilibrium AI, and also member of the UN Secretary General’s high-level advisory body on AI, which produced another excellent report that I do encourage you to take a look at if you haven’t yet. Then we have online with us Meena Lysko. Thank you, Meena, for joining us. Founder and Director of Move Beyond Consulting and Co-Director of Merit Empower Her. Also online, Anita Gurumurthy, Executive Director of IT4Change. Thank you, Anita, for joining.
Back to the room, we have Yves Iradukunda, Permanent Secretary, Ministry of ICT and Innovation of Rwanda. Thank you for joining. Brando Benifei, Member of the European Parliament and co-rapporteur for probably the most famous piece of legislation on AI at the moment, the EU AI Act. And Muta Asguni, Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia. Thank you so much for hosting us this year. And we also have an online moderator for our participants online; Mohd will be giving us feedback and input from the online room. So again, in no particular order, I’m going to invite our guests to reflect on a section of the report, to look also at the recommendations, if possible, and to tell us how they see these recommendations moving forward. And I’m going to do an absolutely random pick. Anita, would you like to start?
Anita Gurumurthy: Sure, I can do that. Am I audible? Okay. Thank you. I just wanted to commend the report, especially for the four focus areas. Those really come with a lot of insights and also reflect state-of-the-art analysis, especially on crucial but often neglected areas of environment and labor. It also takes up two very, very difficult areas: one is the whole idea of liability, and the other is the idea of interoperability. I’ll focus on these two, because I’d like to really zoom in on what I think we should be looking at in this domain. What would be interesting and useful is for the report to enlarge its remit in terms of liability rules, which should apply to both producers and operators of systems, because a fairly invested level of care is needed in designing, testing, and employing AI-based solutions. And we need to understand that while producers control the product’s safety features and how interfaces between the product and its operator can be improved, let’s take the whole context of social welfare systems, or governments employing such systems. In that case, the operator of the system is also implicated in decision-making around the circumstances in which systems will be put to use. These are real-world situations, and here I think it’s really important that operators also become liable and bear some of the associated costs when risks become actual harms. So that’s one thing. The second is a particular thought that I have around the training that we do of the judiciary, the training that is needed for lawmakers, policymakers, et cetera. Here I think that the elephant in the room cannot be disregarded, and that is really the whole absence of a global space to make certain decisions. We are particularly concerned not only about the opacity of algorithms, but about the fact that such opacity, often in cross-border value chains, in trade in services, for instance, gets compounded because of trade secret protections. So trade secret claims over the technical details of AI can really become an obfuscating mechanism and limit the disclosure of such information. I want to draw your attention to a recent paper from CIGI in Canada about a landmark case involving Lyft and Uber, in which the Washington Supreme Court ruled that the reports in question, maintained as trade secrets by Lyft and Uber, actually qualify as public records, and in the public interest they have to be put out. So we have to look at this very carefully. The other thing I want to say is that the recommendations in the environment section could also draw on very useful concepts from international environmental law, you know, the Biodiversity Convention, common but differentiated responsibilities, because the financing that is needed for AI infrastructures will require us to adopt a gradient approach. Some countries are already powerful and some are not. So that’s very important. I’d also like to focus a little bit, maybe one minute or one and a half minutes, on the vital distinction between interoperability as a technical idea and interoperability as a legal idea. Sometimes, while calling for this important principle, you know, it’s like openness: we have to be careful about who we are making something open for, or whether there is public interest underlying such openness. So interoperability often enables systemic exploitation of creators’ labor, right?
So oftentimes, if we don’t have guardrails, the largest firms tend to cannibalize innovation. I would like to conclude by saying that we should look at technical interoperability and policy sovereignty not as things that are polarized; rather, we should work towards a framework in which many countries can participate in global AI standards. My last comment would be a fleeting remark about the wonderful chapter on labor, which could perhaps do with one addition about the idea of cross-border supply chains in which labor in the global south is implicated, and the fact that while guaranteeing labor rights, we really need to understand that working conditions in the global south include subcontracting, and therefore transnational corporations must also be responsible in some way when they outsource AI labor chains to third parties or subcontractors, so that we are actually looking at responsibility in the truest sense of the term. I’ll stop here, thank you.
Sorina Teleanu: Thank you, Anita. We’re already adding more keywords to the ones we have in the four main sections of the report, and I have two on my list right now: transparency and responsibility. I’ll be adding more during the discussions, and at the end we’ll see what the keywords of this debate were. So we’re moving from the global south to the global north, and I’m going to invite Brando to provide his reflections, also because they relate to what Anita has been talking about: interoperability and cross-border supply chains. So, Brando, you have the floor.
Brando Benifei: Thank you. Thank you very much. First of all, I’m really happy to be able to talk in this very important panel, because clearly on AI we need to build global cooperation, global governance, and we need to examine together what the challenges are. And in fact, the impact on labor and the opportunities of having interoperable technological development around AI are some of those challenges. I think that the fact that we have chosen, and it was a debated choice, it was not obvious, to identify the use of artificial intelligence in the workplace as one of the sensitive use cases that the AI Act regulates, to try to build safety and safeguards for workers, for those that are impacted by AI in the place where they work, is one important direction. Also, from a larger policy point of view, the impact of AI on the labor market is clearly already very significant, so we need to build common strategies to manage the change in how the workforce will be composed. Consider that we are only two years into the generative AI revolution, to some extent we can call it that; it’s only two years since it reached the general public, and we will see what happens in a short time after. So the impact is already strong. We need to consider the change that is happening, like when electricity was introduced. Sometimes I hear it’s like with the internet. No, because the internet is not as pervasive as AI can be. AI can change every workplace, every dynamic of labor. It’s like the invention of electricity; it’s like the use of steam in the development of pre-industrial automatic processes. We can look at it with that eye, I would say. And that’s why we need global governance. We need rules, because the impact on our societies is in fact even larger, not just on labor. But obviously, and I say this as one who negotiated a regulation that dealt with market rules, we need to build a set of policies, fiscal policies, budgetary policies, permanent lifelong-learning policies, that are able to deal with these changes. And I really believe that we need to build common standards, common definitions. We are working on that in various international fora so that we can have more interoperability. In fact, you know that the EU has been leading on pushing against those that limit interoperability; one other legislative act of the EU, the Digital Markets Act, is also targeted at increasing interoperability. And we think this is crucial if we want our different parts of the world to work together and to find solutions between our different businesses, so that our AI systems can cooperate and work together, not sit in separate silos. I don’t think that would be good for our economies, or for global understanding. We need AI to also be respectful of different traditions, different histories. I say that because we risk instead, because of the dynamics of how the training of AI happens, ending up with a very limited cultural scope. And I say that from Europe, so it could apply even more to other parts of the world. These are some of the challenges we face. I strongly believe that we need to combine, and I conclude on this point, two different efforts.
On the one hand, domestic policy, in the sense that we need to have our own rules on how we deal with AI entering into our society. There can be different models, there will be different models, but we can build some common understanding. For example, and this applies again also to the labor topic, we have built some common language, also looking at the work of the UN, on the issue of risk categorization: the idea of attaching different levels of risk to different ways of using AI as a common way of looking at how we use AI. And on the other hand, I think we also need to concentrate on where we need to work at a supranational level, because there are issues where we cannot find solutions without working across borders. I will mention one thing that is outside the two topics of labor and liability, but I think it’s especially important to mention it to conclude: the issue of the security and military use of AI. I think it’s very important that we work on that, because all the other actions are not effective if we are not able to control AI used as a form of weapon, or as a form of security, in all its implications. So these are some of my reflections on the topic. Thank you very much.
Sorina Teleanu: Thank you, also for covering quite a few topics. The good news on your final point about the discussions on the security and military implications of AI is that there is a debate at the UN General Assembly on a potential resolution on that. So for anyone in the room who belongs to a government, do encourage your Ministry of Foreign Affairs to be part of this discussion, because as Brando was saying, it is important to have this as some sort of universal agreement at the UN level. On the interplay between global governance and rules and domestic policies, I hope we can get back to that a little later in the session, and I’m hoping to also hear reflections from the room, because that’s a very important point. Okay, if we agree on something at an international level, what next, and how can we implement those policies locally, at the national and regional level as well? And I also liked your point about common standards and definitions. It’s not easy to agree on these things at the regional level, and at the international level it’s even more complex, but it would help when we discuss, again, interoperability, liability, and all the other issues that have been raised so far. Let me move on to Jimena, because she’ll also speak about liability.
Jimena Viveros: Hello, thank you very much. It’s a pleasure to be here with all of these distinguished speakers and the audience. First of all, I would like to highlight what Brando was saying before about peace and security, because I think that is key. As a commissioner for REAIM, the global commission for the responsible use of AI in the military domain, we like to expand this into the broader peace and security domains, because the implications of AI, which we obviously know in the civilian space in all types of different forms, are not limited to the military in the peace and security domains. We can see them in civilian actors that are state actors, such as law enforcement and border control, and we can also see them in non-state actors that are also civilian, which can range from terrorism, organized crime, and mercenary groups to just rogue actors. So it’s very important to look at it from all of these dimensions, because they have a very destabilizing effect internationally, regionally, and at every level, because of the scalability, the easy access, the proliferation of it all. That’s why accountability and liability are so important. The report is great, and it really tackles a lot of the good topics about liability. However, the report only focuses on liability in terms of administrative, civil, and/or product liability. It was a deliberate choice to exclude criminal responsibility, but I would go a little bit further and say that we also need to look at state responsibility for the production, the deployment, and the use, in the entire life cycle, basically, of any of these AI systems. I think it’s very fitting that liability is the first section of this report, because it’s extremely important. Why? Because in the current landscape we are living in, where international law is pretty much blatantly violated with complete impunity all the time, talking about accountability seems like a fairy tale, but it’s really important to uphold the rule of law, to rebuild trust in the international system, which is at a critical moment right now. Also for the protection of all types of human rights for all types of people, especially those in the Global South. I am Mexican, and in the Global South we are disproportionately affected by these technologies, both by the digital divide and by their deployment, and the fact that we are basically consumers, not developers, also greatly influences how we are affected by the technology. It also matters because accountability, especially in the criminal domain, has a deterrent effect, and this deterrent effect helps promote safe, ethical, and responsible use, development, and deployment of AI. And it also allows for remedies for harm. These mechanisms are very important, and should be included in every type of accountability framework, because we do have a lot of problems that stem from AI in terms of liability and accountability, and I prefer the term accountability because it’s more encompassing. So we have the atomization of responsibility: there are so many actors involved throughout the entire life cycle of these technologies, enterprises, people, and also states as a whole. That’s why I included state responsibility. I identify three categories, the users, the creators, and the authorizers, and they are not mutually exclusive; each type of responsibility can be allocated on its own, and should be allocated on its own. Obviously, the opacity that was mentioned, the black box, also affects the proper allocation of responsibilities to each one of the actors, as does the fragmentation of governance regimes, because what we’re witnessing now is a kind of forum shopping: whichever jurisdiction is more amenable to your purposes, that’s where you set up, or that’s where you operate, and so on. That’s why a global governance regime is extremely important, because these technologies are transboundary, as has already been said. Having a patchwork of initiatives is completely insufficient, and the regimes that we have right now for reporting, monitoring, and verifying everything that could eventually lead to some type of accountability are all based on voluntarism, which in my opinion is absolutely insufficient. It’s ineffective. At the OECD, we have a framework for monitoring incidents which is obviously based on self-reporting, and we have witnessed there that the lack of transparency and the lack of accuracy in these systems of voluntarism is just not going to work; it’s absolutely unsustainable. Also, the type of self-regulation that is being used, or self-imposed by the industry sector, is not going to work if we don’t have actual enforcement mechanisms in a centralized authority, because if we go, again, state by state, it’s really not going to be very efficient. I think we all have a general notion of what accountability is, what it means, and why it matters. We just need to find the solutions, and the willingness to do so, because everyone should be accountable throughout the entire life cycle of AI. I’ll leave it here, but I’m happy to expand on some issues later. Thank you.
Sorina Teleanu: Thank you, Jimena. I think we’re already collecting suggestions for the policy network to continue working on these issues next year, and I’m taking notes of some of the areas that could be in focus. You mentioned the impact of AI on peace and security more broadly, going beyond the military domain; the notion of state responsibility and liability; and then the fragmentation of AI governance. And I’m going to put a question out there that I hope we can explore a little later with everyone in the room as well: the idea of a global governance regime. Is that feasible? How feasible is it, actually? And what can be done concretely to get there? We all know that the appetite for multilateralism these days is not as strong as we might want, but maybe it’s not all lost. All right, let me continue with our speakers, and I’m going to invite Yves to continue, please.
Yves Iradukunda: Thank you, and good afternoon. It’s great to be here in this critical conversation, and thanks to the Internet Governance Forum for inviting us, and particularly for the commendable work that the Policy Network on AI has done; the report offers really good recommendations that, if implemented and if they guide our engagements going forward, should make a significant impact. This conversation is very critical, and hearing my fellow panelists share their reflections on the report, and insights from their respective contexts and work, challenged me to think about the fact that we can’t talk about responsible AI as if AI were isolated from everything else that we do in our lives. When we think about AI as a technology, we also need to reflect on why AI to begin with, why technology to begin with, and what the impact of technology has been all along, even before AI came in. If we reflect on that, then AI is not a new concept from the perspective of its impact on our day-to-day lives. I say this because technology has been able to help advance innovation, solve different challenges, and help tackle some of the issues that we have, but at the same time, technology has driven some of the inequity issues. As we reflect particularly on AI today, we also need to acknowledge that if the foundational values of why we do technology are not revisited, it’s not just about AI; it’s about the values of our society altogether. But since we are focusing on AI, allow me to reflect. From the perspective of Rwanda, we always ask ourselves: to what extent, to what end, what is the end goal? And we focus primarily on the impact we want to have on our citizens. Whether it’s AI or any other emerging technology, we really want to see it as a tool, a tool that we use to improve the lives of our citizens, whether in healthcare, education, or agriculture, where we are prioritizing our investments to leverage AI in addressing the gaps that we have. And what we’re seeing as an outcome is really leveraging technology investments, and early successes of AI, to bridge the gaps we see in equity and inclusion, but most importantly, to improve the lives of our citizens. The themes for today’s discussion, whether focusing on interoperability in governance, looking at environmental sustainability, or the issues around accountability and the impact AI is having on labor, can all be addressed if we again zero in on the impact we want to have on our society. On unlocking AI’s full potential, I would agree with what has been said before: all the values and ethical guidelines and principles that govern how it’s implemented have to really guide us. There has to be consensus and dialogue on how we deploy the different solutions ethically. And as was said earlier, responsible approaches have to really understand the different players that are accessing AI tools, and the standards should follow those values and should protect against ill-intentioned uses of AI. So when I look at the report and the different recommendations, I find confidence in this global community within the policy network. But again, for this session, I really want to call upon the leaders in the room, the technology specialists, and the corporate companies that are deploying these tools to follow these recommendations, but most importantly, to figure out what it is that we want to do for our society.
And so when it comes to building capacity, I think it’s something that we need to double down on. Right now, there is inequity in how different countries are adopting AI. The talent is probably available in all countries, but in terms of access to the tools and in terms of awareness, there is a big disparity. So even as we speak, most people across the globe may have a limited understanding and appreciation of the impact AI is going to have on their lives. Building capacity should therefore start with awareness, and the deployment of AI tools should really be focused on improving people’s lives at all levels, whether it’s the highest advancement in security, as was just said, or in medicine and other applications. We also need to think about how it affects farmers in their respective societies at different levels. I think we should foster partnership as we follow the implementation of this report. Like I said, it’s not just for governments or corporates alone or international organizations; we need to really bring partnerships forward to make sure that we bridge the divide and accelerate innovation across all levels. And finally, I think we should really commit to the adoption of these policies within our respective jurisdictions. The boundaries of the impact of AI are limitless; even the environmental impact of AI knows no boundaries. So we should pursue innovation around the manufacturing of equipment used for AI solutions, look at energy solutions that are renewable, and limit applications of AI that really work against environmental goals. To conclude, the work on the report and the recommendations is really commendable, and a lot of insights have already been shared here on the panel, but this is really a call on all the leaders present to put at the center the impact on the people, on their citizens, and to really think about how AI serves that purpose of improving their lives.
Sorina Teleanu: Thank you so much, Yves, also for bringing the focus back to issues of inequality, access, and capacity building: how do we bridge the divides that we see actually growing instead of shrinking? And I very much like your question: to what end? Where are we going with this, whatever we call it, technological progress? While listening to you, I was reminded of two quotes we came across the other day while we were going through the discussions of the many sessions, and I just want to read them quickly, hoping we can reflect on them a little later, again building on your point about, okay, to what end. One came from the Secretary General of the UN, who is actually convening this forum, and it is very simple but also very powerful: “Digital technology must serve humanity, not the other way around.” We might want to think about this a bit more as we develop and deploy AI technologies. And the second one is a bit more elaborate, but along the same lines: “Are we sure that the AI revolution will be progress? Not just innovation, not just power, but progress for humankind?” I’m hoping we can have a bit more reflection on this here, but also beyond, in our broader debates on AI governance. I’m going to move online and invite Meena to provide her intervention. Meena, over to you.
Meena Lysko: Thank you. Thank you very much, Sorina. Maybe I could start by first thanking the Internet Governance Forum’s policy network on Artificial Intelligence for organizing this very important discussion and for inviting me to be part of it. I appreciate the chapter on environmental sustainability and generative AI. I’d like to first paint a vivid picture; this picture, as well as other scenarios, I firmly believe, has been the premise for the Internet Governance Policy Network on AI. As it stands, the global south is indiscriminately impacted by generative AI and its associated technologies. The global north economies are strengthened largely by providing technologically advanced solutions which are taken up worldwide, and at the same time, the global north has the resources and time to implement and enforce policies which protect local environments. The entire day is not necessarily spent on hard and hazardous labor just to get food into the mouths of the poor. This may not be the same in poorer and developing countries. Just as with plastic pollution, we will see greater disparities in the impact of non-green industries on the environments of the most vulnerable. To illustrate this view, I will use the example of generative AI in the automotive industry, which is being transformed by the integration of electric vehicles, software-defined vehicles, smart factories, and generative AI. Identifying red flags related to environmental harm across the entire value chain of electric vehicles is crucial to sustainable development. So, permit me: a key red flag is the biodiversity loss from mining raw materials. Generative AI relies on large-scale data centers, GPUs, and other computational hardware, as do all of us with, for example, our smartphones, all of which require metals and minerals like lithium, cobalt, nickel, rare earth metals, and copper. Extracting these materials impacts local ecosystems, wildlife, and the broader environment. Let’s look at this from the perspective of deforestation and habitat destruction. Consider cobalt, mined in the forests of the Democratic Republic of the Congo, a global south country. Cobalt is a chemical element used to produce lithium-ion batteries, for example. The country has seen genocide and exploitative work practices, and the cutting down of millions of trees, in turn negatively impacting air quality around mines. More so, cobalt is toxic. The expanded mining operations result in people being forced from their homes and farmland. According to a 2023 report, the forests, highlands, and lakeshores of the eastern DRC are guarded by armed militias that enslave hundreds of thousands of men, women, and children. The destruction of forests due to cobalt mining reduces the earth’s natural carbon sinks, which are crucial for mitigating climate change. Let’s also be reminded of the negative impacts of copper and nickel mined in the Amazon rainforests, the escalating nickel extraction in Indonesia, and lithium mined in Chile’s Atacama Desert. Besides the biodiversity loss from mining raw materials, we can also explore water pollution from material extraction, processing, and battery disposal; the carbon footprint from energy-intensive production, assembly, and charging; and waste generation at every stage, including battery disposal and component manufacturing.
Then there are the social and ethical issues, like child labor in mining, hazardous working conditions, and greenwashing. Addressing these red flags requires stricter regulation, sustainable sourcing, clean energy use, and investments in circular economy practices. We need to be extra mindful of the impact of batteries on the environment in the longer term. We are presently having to manage the disposal of electronic waste, including the plastic. These permeate our vital land and waters, but still at a micro and nano level. If we fast forward a few decades from now, the battery waste looks set to be far more unmanageable, as we will then be looking at seepage of fluids into our ecosystems. So, the policy network on artificial intelligence policy brief report provides seven multi-stakeholder recommendations for policy action. I’d like to emphasize, on developing a comprehensive sustainability metric for generative AI, which is recommendation one, that the standardized metrics must have leeway to adapt, to take into consideration our rapidly evolving digital space. Today, we are having to look at the repercussions of elements such as cobalt, nickel, and lithium. We are having to consider greener technologies to meet the nominal energy demand relating to generative AI. A decade, or even a few years from now, our targets will likely be completely different. Also, if I can add one more, I suggest that we have, in addition to the seven recommendations, an outlook on environmental impact beyond the terrestrial, because we have moved beyond just terrestrial: we are mining outer space. So the global space race for mining resources to quench our generative AI thirst also needs consideration. I’d like to pause there for now. Thank you very much.
Sorina Teleanu: Thank you also, Meena. Thank you for making us think of issues right in front of us that we sometimes tend not to see precisely because they are right in front of us, and for raising more awareness about the use and misuse of natural resources here on earth, but also in outer space. That’s not something we talk about so much in AI governance discussions, but it is a very important point, also because we don’t necessarily have a global framework for the exploitation of space resources, and it would probably be better to start thinking about that sooner rather than later. Because, as Meena was saying, we do see a lot of competition for the use of resources for the development of AI and other technologies. So, thank you so much for bringing that up as well. Moving on to Muta for your reflections, please.
Muta Asguni: Thank you so much, Sorina. Really happy to be here with you on this session at IGF. I think there is a lot of ambiguity and uncertainty, as you mentioned, Sorina, surrounding AI. This is not just the talk of the hour, and not just the talk of the hour in Saudi Arabia; this is the talk of the minute and second everywhere in the world. And before I talk about the paper and the report, I want to take a step back and look at history, because history is not just the greatest teacher; it’s also the greatest predictor of the future. We as human beings, as a global society, as a united nation, have been here before. Four times. We’ve been here before in the first industrial revolution, with the transition from agriculture to industrialization. Then again with the introduction of electricity. And again with the introduction of computers. And then in the fourth industrial revolution, with the introduction of the Internet. And now, we’re on the cusp of the fifth industrial revolution, with the transition from the digital age to the intelligence age. Each one of the previous four industrial revolutions had a profound impact on three specific aspects: on infrastructure, on society, mainly on labor, and on policy. Let’s take electricity as an example, as Brando just mentioned. When electricity was introduced, we had to develop a lot of new infrastructure to deliver electricity to every home, to give everyone a chance to harness its power and use it in a safe and robust manner. When we talk about electricity and its impact on society and jobs, the jobs market was never the same before and after electricity. It changed forever, and we adopted, adapted, and prospered together with electricity. We upskilled and reskilled our economies and our people to be able to leverage that technology for the greater good. In terms of policy, we developed standards, we developed frameworks. We as an international community came together to build a robust and meaningful framework that we can all work on together for the greater good in the use of electricity. AI is not going to be different. If we look through the same three lenses from the AI perspective, let’s take infrastructure as an example. Today, we’re using about 7 gigawatts of electrical power in data centers around the world. This is projected to grow to 63 gigawatts by 2030. In just five years, we’re expected to consume nearly ten times the electricity that we consume today for data centers. This will have a profound impact on the environment. But the good news is that 30% of the 7 gigawatts we use today, and this is a funny anecdote, is actually being used to predict the weather, using very old machine-learning technologies, just to predict seven days of weather. Now we can use generative AI to predict not just 7 but 12 days, with much less power, and reutilize that excess power for new uses of AI. Now, in terms of society, yes, AI is going to have a profound impact on jobs. Jobs are, again, not going to be the same before and after we fully adopt AI. But we as a global society need to come together again and upskill and reskill our economies in order to adopt, adapt, and prosper together with AI. Finally, in terms of policy, which is the main topic of discussion in this session: like every technology, there are two aspects when it comes to policy for AI.
And this is true for every technology: there is a local aspect and a global aspect. In terms of the local aspect, we can look at the collection, use, utilization, and access of data and AI technologies within specific geographies, according to local priorities and agendas. In terms of the global aspect, where amazing work has already been done with the establishment of this report, we actually need to work as global bodies with local governments, with the private sector and the public sector. And the good news is that everyone is willing to put their hands together and leverage whatever we have today for the good of humanity and to ease the adoption of AI. And with that, I look forward to the rest of the session and the discussion. Thank you so much.
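As a quick back-of-the-envelope check on the figures cited above, the sketch below works through the arithmetic. It is a minimal illustration that takes the speaker’s numbers as given (7 GW today, 63 GW projected by 2030, 30% of the current draw spent on weather prediction); none of the inputs are independently verified.

```python
# Back-of-the-envelope check on the data-centre power figures cited in the session.
# All inputs are the speaker's figures as given, not independently verified.

current_gw = 7.0      # stated global data-centre electricity draw today, in gigawatts
projected_gw = 63.0   # stated projection for 2030, in gigawatts
weather_share = 0.30  # stated share of today's draw used for weather prediction

growth_factor = projected_gw / current_gw  # 9.0 -> "nearly ten times"
weather_gw = current_gw * weather_share    # 2.1 GW spent on forecasting today

print(f"Projected growth by 2030: {growth_factor:.0f}x")
print(f"Power currently used for weather forecasting: {weather_gw:.1f} GW")
```

The 9x ratio is consistent with the speaker’s “nearly ten times” characterisation, and the roughly 2 GW currently spent on forecasting is the headroom the anecdote suggests could be reclaimed for other AI uses.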
Sorina Teleanu: “For the good of humanity” is a very good way to end this section of the discussion. I did promise we’d have a dialogue, and we only have 19 more minutes of this session, so I’m going to try to do that. I look into the room, and I also count on my colleague online to tell us what’s happening there. Any hands? Anyone who would like to… Do you have a mic there, or how does it work? Okay, I’ll come to you. Probably easier. Let’s try. Please also introduce yourself. Okay.
Audience: Thank you so much for giving me the opportunity to ask some questions. First, I’d like to congratulate you on your hard work in releasing this report. I know it is very hard work to address such complicated issues, and I’m eager to read this report. But back to the main theme, the AI governance we want: I want to ask a fundamental question about what the overarching goal of AI governance is. Is it acceptable to use the title of the very first United Nations resolution on AI, adopted by the General Assembly in March: Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development? If not, what is a better articulation of the overarching goal of AI governance? That’s my first question. My second question is that I believe governance goes beyond regulation. Governance deals with technical innovation: we do need technical innovation, but we also need governance to guide these innovations for the greater good of the people and the planet. So if safe, secure, and trustworthy AI for sustainable development is the overarching goal of AI governance, how can we guide AI innovation in line with the sustainable development goals, and even use it to accelerate the implementation of the sustainable development goals? And my third question is back to regulation. A common concern for AI applications is disinformation, but disinformation comes from the misuse of AI tools by users. Take traffic safety as an example: for traffic safety we need safe cars and safe roads, but more importantly we need people, the drivers, to obey the rules. So how can we have a comprehensive governance framework that regulates the behavior of AI users? I’ll stop here. Thank you so much.
Sorina Teleanu: Thank you also. I think we’ll try to get a few more questions and then provide reflections. Any more points from the room? I don’t see any hand, and we covered quite a few topics, so I’m pretty sure you have at least a small reflection in mind, thinking about all of them. I’m seeing a hand there. Could you please come over? There are only mics here, unfortunately. Meanwhile, I do like your question: what do we want from AI governance, and what is the AI governance we want? And while we’re waiting, Jimena, do you want to provide some reflections?
Jimena Viveros: Yes. So obviously there have been around four important resolutions this year regarding AI. One was promoted by China, the one you mentioned, which is fantastic. All of these resolutions are steps forward, and they are also leading up to the global governance that we want and that we expect. That’s why we had the Summit of the Future this September, and we had the Global Digital Compact and the Pact for the Future. All of the documents that were adopted therein are a monumental step, because we are now guiding the path of where AI is going to be governed, by and for humanity. That’s actually the title of the Secretary General’s High-Level Advisory Body report, Governing AI for Humanity. And I’d like to say also for the benefit and protection of humanity, because, as you mentioned, AI has enormous potential and can be harnessed for good, for all types of advancement of all of the sustainable development goals. However, as Amina Mohammed said at the Arab Forum this past March in Beirut, there can be no sustainable development without peace. So, going back to the point of peace and security and the importance of AI and its dual-use nature: what we want to create is global governance that encompasses this dual-use nature, repurposability, all of that. And it’s important to have it because, again, if not, we’re just going to have fragmented approaches that are not interoperable, not coordinated, and not cooperative. So we all need to work towards this, and the only way we can do it is through the adoption of a binding treaty. That’s going to be hard, but we need to be ambitious in order to have this technology governed by us, rather than eventually being surpassed by it.
Audience: Hi, my name is Ansgar Kuhne, I’m with EUI. Of course, I’d like to congratulate the Policy Network on AI for this very important report. I’d like to invite the panel to reflect on the interaction between the liability and interoperability aspects: specifically, if we have interoperating AI systems, how best to identify where liability lies in case issues do arise. Is there a role for contractual agreements in this? And if so, how do we deal with the imbalances in both informational and economic power that the various actors within that network of interoperating players may have? Thank you.
Sorina Teleanu: Thank you as well. Do we have more hands in the room? Yes, we do. Try the mic over there. If not, I’ll come your way. Not so much. Okay, it’s going to take me a while. If anyone would like to provide any reflection while I do the walk, please go ahead. Thank you.
Audience: Thank you. Riyad Najm; I’m from the media and communications sector. Now, in order for us to govern something, don’t we have to define it first? We all talk about artificial intelligence and what it is and what good or bad it can do to us, but until now I have not seen a correct and definite definition of AI. By definition, does it mean the speed at which we can execute our computations, or is it the amount of data that we can access and manipulate at the same time? We all know that artificial intelligence was established a long time ago. The only reason it is becoming relevant now is that we are able to access a great amount of data at the same time, and we have high-speed computation. So we need to define it first before we try to govern it. And maybe my other comment: for the past almost 20 years, we have not been able to govern the internet itself at a global level. All we get are sometimes guidelines, some initiatives, and so on; there has never been a treaty that covers this. Are we able to do that for artificial intelligence? I leave that to the panel to answer. Thank you.
Sorina Teleanu: Thank you as well for taking us many, many steps back and asking the question of what exactly we are talking about when we talk about AI. I'm going to turn online briefly and see if we have any questions or reflections from there. From our online participants, not our online speakers. If our online moderator can just go ahead and unmute.
Audience: Okay, thank you very much. Actually, we have a problem with our audio, so I'm not quite sure whether you can hear us well or not. Please go ahead, we can hear you well. Okay, there are a few questions from the online audience. The first one, I think, is more or less a repetition of the last question: how do we address the regulatory arbitrage between countries, especially between the Global North and the Global South? The situations are very different, and even for the internet it has been hard to regulate at a global level for the past 20 years. So how do we address this issue? The second question: we have a problem of wrongful use of AI, which is at its worst in the case of military applications. How do we safeguard ourselves to ensure that AI is not wrongly used for military purposes? And then we have an interesting question from Omar: AI is producing a lot of harms in online interactions; there are bullying cases, and many things are being falsely generated by generative AI. How do we protect ourselves, especially the young generations? And finally, how do we foster collaboration when we encounter these problems? I think those four questions are enough, because we have about seven minutes left. Thank you very much.
Sorina Teleanu: Thank you also, Mohd, sorry, and to everyone else online and here. We have seven minutes to answer quite a few questions, and I'm going to turn to you. Please go ahead.
Brando Benifei: Yes, well, a lot of different things have been asked; I will try to answer a few. In fact, the issue of definition that was raised was a big issue for us too. In the end, I think it's very important that we concentrate as much as possible on defining the concrete applications of AI, so that we define the systems, we define what we want to regulate for regulation's sake, because we are not talking about philosophy or other disciplines that should analyze AI in its different aspects. We have been working on that in the EU, and there are important processes ongoing at the UN and the OECD, and I think we need to stick to the minimum that we can so that we can find more agreement; otherwise, we will lose sight of the goal. On the sustainable development goals issue that was mentioned, I think it's important to also mention the risk of excessive wealth concentration, which will limit access to services and to the issues we mentioned of lifelong learning, etc. So in fact there is also the question of how we distribute the added value created by AI-driven increases in productivity. On the policy side, it cannot be avoided; I think we need to bring that to the table too. It's fiscal policy, it's budgetary policy, it's a new welfare system, because with the revolutions we talked about, the industrial revolution, electricity, the digital space, we have seen changes in how we organized our safety nets and our state support systems. So we need to work on that as well. And finally, on liability, I want to say that it's very important that we work towards more transparency. This is what we have been working on with the AI Act, because if there is no downstream transparency between the various operators in the AI value chain, then the risk of asymmetry and the shifting of responsibilities downstream will damage the weaker actors. It will strengthen the incumbents, and we will not have a healthy market for AI. So liability, transparency, yes, we can have contractual agreements, but only if we have strong safeguards against information asymmetry; otherwise we will just entrench market advantages and, I think, suppress innovation. So we need to find a good way. In Europe we are now working on new AI liability legislation that complements the AI Act, and we will surely be discussing this in the future in this kind of context as well. Thank you very much.
Sorina Teleanu: Thank you also, Brando, for highlighting the need for whole-of-government and whole-of-society approaches to dealing with the challenges of AI. Any more reflections from our speakers? We have three more minutes.
Muta Asguni: So, there were a lot of questions regarding regulation, governance, and the definition of AI. I just want to take a step back and highlight an excellent approach taken in the report with regard to forms of regulation: focusing on the value chain of AI. You cannot govern the whole of AI at once; you need to break it into components and look at each component in isolation from the others. I also want to mention that we still don't fully understand AI. It is maybe not the first such technology, we still don't fully understand electricity, for example, but it is a technology that gives us answers in a way that is not very transparent: we don't know why the model gave us a particular answer. So a change in how we regulate and govern such a technology is very much needed. We cannot take only a reactive approach, especially when it comes to liability; we also need a proactive approach in the appropriate components of the value chain. In the data layer, for example, the collection of data, we can take a reactive approach; but for access to AI, for example, we may need to consider a more proactive approach to governance and regulation. And from that, I want to talk a little bit about interoperability, because one of the biggest questions we get from investors considering investment in Saudi Arabia, for example, is: if I'm compliant with the laws and regulations in country X, am I going to be compliant with the laws and regulations in your country? This is a very important question, especially given the differences between the GDPR and the PDPL in Saudi Arabia. So we need frameworks and interoperability, especially when it comes to data, because data governance is currently fairly clear, and we can move upstream from data into the rest of the AI value chain from there. Thank you.
Meena Lysko: Thank you. Thank you very much. Perhaps just from my side, I'd like to again emphasize that in order for us to have a future world and an equal future, sincere and responsible collaboration is crucial. And, as the report puts it, we need to prioritize sustainability in the design, deployment, and governance of generative AI technologies. And maybe a last point: without an environment, there is no point in collaborating to boost economies or develop societies. We need to move off our path of total global destruction. Thank you.
Sorina Teleanu: Thank you also. We have quite a few powerful messages out of this session. I hope someone is taking due notes; if not, we have AI-enabled reporting. Jimena?
Jimena Viveros: Very quickly, I just wanted to say that even though there is no single definition of AI, we are getting there; the technology has been here for over 70 years, so we have some understanding of it. And what we're trying to do now is to whiten the black box through explainable AI and so on. We're trying to do forensics, for example, on the models, to see how they came up with their outputs. This is very important work that is going to help us make AI more accountable. And the global governance framework, I think, should be overarching across all of the topics. Obviously, there are going to be a lot of subset regimes, but they should all depend on the umbrella of governance. And just to finish on liability, I think the one conclusion we can come to is that if you cannot fully control the effects of a technology, you should accept, by the mere fact that you are using it, responsibility for whatever happens. I think that should be the general rule we keep in mind for now, especially in the peace and security domain or when human rights violations are involved: high-scale or high-risk frontier models and all other types of decision-support systems and autonomous weapons systems. Thank you.
Sorina Teleanu: Thank you. Yves, Anita, any final reflections from you before we wrap up?
Yves Iradukunda: Just to, again, agree with the comment on liability. I think it goes back to the emphasis that has been placed on awareness and capacity building, because some of the liability may arise from the most vulnerable link in our ecosystem. That means we need to emphasize partnership, because if responsible use of these methods is applied in only one jurisdiction, it will not leave the rest of the countries or organizations safe. So, again, an emphasis on building the partnerships that reinforce collaboration to advance some of the values that have been discussed.
Sorina Teleanu: Thank you. Amrita, if you're still with us and would like to add something? Okay, perhaps not. We are out of time. I'm not even going to try to summarize the many points that have been touched on today, but I'm sure there will be a very comprehensive report from the Policy Network facilitators, and there will also be an AI-enabled one, as I was saying. I do, again, encourage everyone to take a look at the report, maybe even just the recommendations; there is a chatbot that will allow you to interact with it directly. I am looking forward to seeing how the Policy Network will continue its work, building on some of the very useful and thought-provoking reflections from today. Many thanks to our speakers here and online, many thanks to you in the room for your contributions, and to our online participants as well. Enjoy the rest of the IGF, and let's see where we get with AI, humanity, governance, society, and all the implications around them. Thank you so much.
Jimena Viveros
Speech speed
147 words per minute
Speech length
842 words
Speech time
342 seconds
Need for global governance framework for AI
Explanation
Jimena Viveros argues for the necessity of a global governance framework for AI. She emphasizes that this framework should be overarching and encompass all topics related to AI governance.
Evidence
She mentions that there will be subset regimes, but they should all fall under the umbrella of global governance.
Major Discussion Point
AI Governance and Regulation
Agreed with
Brando Benifei
Anita Gurumurthy
Agreed on
Need for global governance framework for AI
Importance of state responsibility for AI systems
Explanation
Jimena Viveros emphasizes the need for state responsibility in the production, deployment, and use of AI systems throughout their entire lifecycle. She argues that this is crucial for rebuilding trust in the international system.
Major Discussion Point
Liability and Accountability for AI
Agreed with
Brando Benifei
Anita Gurumurthy
Agreed on
Importance of addressing liability and accountability in AI systems
Differed with
Anita Gurumurthy
Differed on
Scope of liability for AI systems
Challenge of allocating responsibility given opacity of AI systems
Explanation
Viveros highlights the difficulty in allocating responsibility due to the opacity of AI systems. She points out that the ‘black box’ nature of AI makes it challenging to determine how decisions are made.
Evidence
She mentions ongoing efforts to develop explainable AI and forensic techniques to understand how AI models produce their outputs.
Major Discussion Point
Liability and Accountability for AI
Brando Benifei
Speech speed
131 words per minute
Speech length
1909 words
Speech time
872 seconds
Importance of domestic policies alongside global governance
Explanation
Brando Benifei emphasizes the need for both domestic policies and global governance for AI. He suggests that while global cooperation is necessary, countries also need to develop their own rules for dealing with AI in their societies.
Major Discussion Point
AI Governance and Regulation
Agreed with
Jimena Viveros
Anita Gurumurthy
Agreed on
Need for global governance framework for AI
Need to focus on concrete AI applications in regulation
Explanation
Benifei argues for focusing on defining and regulating concrete applications of AI rather than getting bogged down in philosophical definitions. He suggests this approach is more practical for regulatory purposes.
Evidence
He mentions that the EU has been working on this approach, and there are ongoing processes at the UN and OECD.
Major Discussion Point
AI Governance and Regulation
Differed with
Muta Asguni
Differed on
Approach to AI regulation
Need for transparency to address liability issues in AI value chain
Explanation
Benifei emphasizes the importance of transparency in the AI value chain to address liability issues. He argues that without downstream transparency, there is a risk of asymmetry and unfair distribution of responsibilities.
Evidence
He mentions that the EU is working on new AI liability legislation to complement the AI Act.
Major Discussion Point
Liability and Accountability for AI
Agreed with
Jimena Viveros
Anita Gurumurthy
Agreed on
Importance of addressing liability and accountability in AI systems
Need to consider AI’s impact on labor markets and upskilling
Explanation
Benifei emphasizes the need to consider AI’s impact on labor markets and the importance of upskilling. He argues that AI will significantly change the job market, requiring adaptation and new skills.
Evidence
He compares the impact of AI to previous industrial revolutions, suggesting it could be as transformative as the introduction of electricity.
Major Discussion Point
Environmental and Social Impacts of AI
Need for common standards and definitions for AI globally
Explanation
Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests this is crucial for enabling interoperability and cooperation between different parts of the world.
Evidence
He mentions ongoing work in various international fora to develop these common standards.
Major Discussion Point
AI and Global Cooperation
Agreed with
Meena Lysko
Yves Iradukunda
Agreed on
Need for collaboration and partnerships in AI development and governance
Anita Gurumurthy
Speech speed
140 words per minute
Speech length
752 words
Speech time
320 seconds
Challenges of regulating AI given its transboundary nature
Explanation
Anita Gurumurthy highlights the difficulties in regulating AI due to its cross-border nature. She points out that the opacity of algorithms in cross-border value chains, combined with trade secret protections, can hinder effective regulation.
Evidence
She cites a recent case involving Lyft and Uber where the Washington Supreme Court ruled that reports maintained as trade secrets should be made public in the public interest.
Major Discussion Point
AI Governance and Regulation
Agreed with
Jimena Viveros
Brando Benifei
Agreed on
Need for global governance framework for AI
Need for liability rules for both producers and operators of AI systems
Explanation
Gurumurthy argues for the necessity of liability rules that apply to both producers and operators of AI systems. She emphasizes that a high level of care is needed in designing, testing, and employing AI-based solutions.
Evidence
She provides examples of social welfare systems and government employment of AI systems to illustrate the importance of operator liability.
Major Discussion Point
Liability and Accountability for AI
Agreed with
Jimena Viveros
Brando Benifei
Agreed on
Importance of addressing liability and accountability in AI systems
Differed with
Jimena Viveros
Differed on
Scope of liability for AI systems
Muta Asguni
Speech speed
137 words per minute
Speech length
1135 words
Speech time
495 seconds
Importance of proactive and reactive approaches to AI governance
Explanation
Muta Asguni argues for a combination of proactive and reactive approaches to AI governance. He suggests that different components of the AI value chain may require different regulatory approaches.
Evidence
He gives examples of taking a reactive approach to data collection and a more proactive approach to AI access.
Major Discussion Point
AI Governance and Regulation
Differed with
Brando Benifei
Differed on
Approach to AI regulation
Potential of AI to support sustainable development goals
Explanation
Asguni highlights the potential of AI to contribute to sustainable development goals. He suggests that AI can be a tool to improve citizens’ lives in areas such as healthcare, education, and agriculture.
Major Discussion Point
Environmental and Social Impacts of AI
Challenge of regulatory arbitrage between countries
Explanation
Asguni highlights the challenge of regulatory arbitrage between countries in AI governance. He points out that differences in regulations between countries can create complications for businesses and investors.
Evidence
He gives an example of investors asking about compliance with regulations in different countries, specifically mentioning differences between the GDPR and the PDPL in Saudi Arabia.
Major Discussion Point
AI and Global Cooperation
Meena Lysko
Speech speed
124 words per minute
Speech length
968 words
Speech time
467 seconds
Environmental impacts of AI infrastructure and resource extraction
Explanation
Meena Lysko highlights the significant environmental impacts of AI infrastructure and resource extraction. She emphasizes the need to consider these impacts in the development and deployment of AI technologies.
Evidence
She provides detailed examples of environmental damage from mining activities for materials used in AI hardware, such as cobalt mining in the Democratic Republic of Congo and lithium mining in Chile’s Atacama Desert.
Major Discussion Point
Environmental and Social Impacts of AI
Need for sincere collaboration on responsible AI development
Explanation
Lysko emphasizes the importance of sincere and responsible collaboration in the development and governance of AI technologies. She argues that this is crucial for creating an equal future and addressing global challenges.
Major Discussion Point
AI and Global Cooperation
Agreed with
Yves Iradukunda
Brando Benifei
Agreed on
Need for collaboration and partnerships in AI development and governance
Yves Iradukunda
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 seconds
Importance of addressing inequalities in AI adoption and access
Explanation
Yves Iradukunda highlights the need to address inequalities in AI adoption and access. He emphasizes the importance of building capacity and awareness to ensure equitable development and use of AI technologies.
Major Discussion Point
Environmental and Social Impacts of AI
Importance of partnerships to bridge divides in AI development
Explanation
Iradukunda stresses the importance of partnerships in bridging divides in AI development. He argues that collaboration is essential to enforce responsible use of AI across different jurisdictions.
Major Discussion Point
AI and Global Cooperation
Agreed with
Meena Lysko
Brando Benifei
Agreed on
Need for collaboration and partnerships in AI development and governance
Agreements
Agreement Points
Need for global governance framework for AI
Jimena Viveros
Brando Benifei
Anita Gurumurthy
Need for global governance framework for AI
Importance of domestic policies alongside global governance
Challenges of regulating AI given its transboundary nature
The speakers agree on the necessity of a comprehensive global governance framework for AI, while acknowledging the need for domestic policies and the challenges posed by AI’s transboundary nature.
Importance of addressing liability and accountability in AI systems
Jimena Viveros
Brando Benifei
Anita Gurumurthy
Importance of state responsibility for AI systems
Need for transparency to address liability issues in AI value chain
Need for liability rules for both producers and operators of AI systems
The speakers emphasize the importance of establishing clear liability and accountability mechanisms for AI systems, including state responsibility and transparency in the AI value chain.
Need for collaboration and partnerships in AI development and governance
Meena Lysko
Yves Iradukunda
Brando Benifei
Need for sincere collaboration on responsible AI development
Importance of partnerships to bridge divides in AI development
Need for common standards and definitions for AI globally
The speakers agree on the importance of collaboration, partnerships, and common standards in AI development and governance to address global challenges and bridge divides.
Similar Viewpoints
Both speakers emphasize the need for practical approaches to AI governance, focusing on specific applications and combining proactive and reactive regulatory strategies.
Muta Asguni
Brando Benifei
Importance of proactive and reactive approaches to AI governance
Need to focus on concrete AI applications in regulation
Both speakers highlight the environmental and developmental aspects of AI, recognizing its potential for sustainable development while also acknowledging its environmental impacts.
Meena Lysko
Muta Asguni
Environmental impacts of AI infrastructure and resource extraction
Potential of AI to support sustainable development goals
Unexpected Consensus
Importance of addressing inequalities in AI adoption and access
Yves Iradukunda
Anita Gurumurthy
Brando Benifei
Importance of addressing inequalities in AI adoption and access
Challenges of regulating AI given its transboundary nature
Need to consider AI’s impact on labor markets and upskilling
Despite representing different regions and perspectives, these speakers unexpectedly converged on the importance of addressing inequalities in AI adoption, access, and its impact on labor markets, highlighting a shared concern for equitable AI development.
Overall Assessment
Summary
The main areas of agreement include the need for a global governance framework for AI, the importance of addressing liability and accountability, the necessity of collaboration and partnerships in AI development, and the recognition of AI’s environmental and social impacts.
Consensus level
There is a moderate to high level of consensus among the speakers on the key issues surrounding AI governance. This consensus suggests a growing recognition of the complex challenges posed by AI and the need for coordinated global action. However, differences in emphasis and approach indicate that achieving a unified global framework for AI governance may still face significant challenges.
Differences
Different Viewpoints
Approach to AI regulation
Brando Benifei
Muta Asguni
Need to focus on concrete AI applications in regulation
Importance of proactive and reactive approaches to AI governance
Benifei advocates for focusing on concrete AI applications in regulation, while Asguni suggests a combination of proactive and reactive approaches depending on the component of the AI value chain.
Scope of liability for AI systems
Anita Gurumurthy
Jimena Viveros
Need for liability rules for both producers and operators of AI systems
Importance of state responsibility for AI systems
Gurumurthy emphasizes liability for both producers and operators of AI systems, while Viveros focuses more on state responsibility throughout the AI lifecycle.
Unexpected Differences
Emphasis on different aspects of AI governance
Jimena Viveros
Yves Iradukunda
Importance of state responsibility for AI systems
Importance of partnerships to bridge divides in AI development
While both speakers discuss AI governance, their focus is unexpectedly different. Viveros emphasizes state responsibility, while Iradukunda stresses the importance of partnerships and collaboration. This highlights the complexity of AI governance and the various approaches that can be taken.
Overall Assessment
Summary
The main areas of disagreement revolve around the approach to AI regulation, the scope of liability for AI systems, and the balance between global governance and domestic policies.
Difference Level
The level of disagreement among the speakers is moderate. While there are differences in emphasis and approach, there is a general consensus on the need for AI governance and regulation. These differences reflect the complexity of AI governance and the various perspectives that need to be considered in developing effective policies and frameworks. The implications of these disagreements suggest that a multifaceted approach to AI governance may be necessary, incorporating elements from various viewpoints to create a comprehensive and effective regulatory framework.
Partial Agreements
Both speakers agree on the need for global governance of AI, but they differ in their emphasis. Benifei stresses the importance of domestic policies alongside global governance, while Asguni highlights the challenges of regulatory arbitrage between countries.
Brando Benifei
Muta Asguni
Importance of domestic policies alongside global governance
Challenge of regulatory arbitrage between countries
Both speakers address the environmental and developmental aspects of AI, but from different angles. Lysko focuses on the negative environmental impacts of AI infrastructure, while Asguni emphasizes the potential of AI to support sustainable development goals.
Meena Lysko
Muta Asguni
Environmental impacts of AI infrastructure and resource extraction
Potential of AI to support sustainable development goals
Takeaways
Key Takeaways
There is a need for global governance and regulation of AI, balanced with domestic policies
Liability and accountability frameworks for AI need to address both producers and operators
The environmental and social impacts of AI, including on labor markets and inequality, must be considered
Global cooperation, common standards, and partnerships are crucial for responsible AI development
AI governance should aim to harness AI’s potential for sustainable development while mitigating risks
Resolutions and Action Items
Continue work of the Policy Network on AI to build on insights from this discussion
Encourage stakeholders to review the full Policy Network on AI report and its recommendations
Explore use of the AI chatbot created to allow interaction with the report’s contents
Unresolved Issues
How to achieve a binding global treaty or governance framework for AI
How to balance proactive and reactive approaches to AI regulation
How to address regulatory arbitrage between countries, especially Global North and South
How to define AI in a way that allows for effective governance
How to ensure transparency and explainability of AI systems for accountability purposes
How to protect against misuse of AI, especially in military applications
Suggested Compromises
Focus on regulating concrete AI applications rather than trying to define AI as a whole
Adopt a value chain approach to AI governance, addressing different components separately
Balance global governance frameworks with flexibility for domestic implementation
Combine proactive and reactive regulatory approaches for different aspects of AI
Thought Provoking Comments
Digital technology must serve humanity, not the other way around.
speaker
UN Secretary General (quoted by Sorina Teleanu)
reason
This concise statement cuts to the heart of the ethical considerations around AI governance, framing the discussion in terms of human-centric values.
impact
It refocused the conversation on the fundamental purpose and ethics of AI development, beyond just technical or policy considerations.
We need to look at this very carefully. The other thing I want to say is that the recommendations in the environment section could also look at very useful concepts coming from international environmental law, you know, the Biodiversity Convention, common but differentiated responsibilities because the financing that is needed for AI infrastructures will require us to adopt a gradient approach.
speaker
Anita Gurumurthy
reason
This comment introduced important environmental and legal perspectives that had not been previously discussed, highlighting the need for a nuanced, global approach.
impact
It broadened the scope of the discussion to include environmental concerns and international legal frameworks, leading to more holistic consideration of AI governance.
We can’t talk about responsible AI as AI isolated from everything else that we do in our lives. I think when you think about AI as a technology, we also need to reflect about why AI to begin with, why technology to begin with, and what has been the impact of technology all along, before even the AI came in.
speaker
Yves Iradukunda
reason
This comment challenged participants to consider AI in a broader historical and societal context, rather than as an isolated phenomenon.
impact
It shifted the discussion towards a more holistic view of AI’s role in society and its relationship to other technologies and social issues.
Today, we’re using about 7 gigawatts of electrical power in data centers in the world today. This is projected to grow to 63 gigawatts by 2030. In just five years, we’re expected to grow and consume 10 times the electricity that we consume today for the use of data centers.
speaker
Muta Asguni
reason
This comment provided concrete data on the environmental impact of AI, bringing a tangible dimension to the discussion of sustainability.
impact
It grounded the conversation in real-world implications and highlighted the urgency of addressing AI’s environmental impact.
Now in order for us to govern something, don’t we have to define it first? I mean, we all talk about artificial intelligence and what it is and what it can do good or bad to us. But until now, I cannot see a correct and definite definition for AI.
speaker
Riyad Najm (audience member)
reason
This question challenged a fundamental assumption of the discussion, pointing out the lack of a clear, agreed-upon definition of AI.
impact
It prompted speakers to address the challenge of defining AI for governance purposes, leading to a more nuanced discussion of how to approach regulation.
Overall Assessment
These key comments shaped the discussion by broadening its scope beyond technical and policy considerations to include ethical, environmental, historical, and definitional challenges. They pushed participants to consider AI governance in a more holistic, global context, while also highlighting the urgency of addressing concrete impacts. The discussion evolved from specific policy recommendations to grappling with fundamental questions about the nature of AI and its role in society.
Follow-up Questions
How feasible is a global governance regime for AI?
speaker
Sorina Teleanu
explanation
This is important to explore concrete steps for establishing international cooperation on AI governance, given current challenges with multilateralism.
What is the overarching goal for AI governance?
speaker
Audience member
explanation
Defining a clear goal is crucial for aligning global efforts on AI governance and guiding policy development.
How can we guide AI innovation to accelerate implementation of sustainable development goals?
speaker
Audience member
explanation
This explores how to harness AI’s potential for addressing global challenges while mitigating risks.
How can we develop a comprehensive governance framework to regulate the behavior of AI users?
speaker
Audience member
explanation
This addresses the need to govern not just AI systems, but also how humans interact with and use AI technologies.
How to best identify where liability would lie in case issues arise with interoperating AI systems?
speaker
Ansgar Kuhne
explanation
This explores the complex legal challenges of assigning responsibility in interconnected AI systems.
What is the correct and definite definition of AI?
speaker
Riyad Najm
explanation
A clear definition is necessary to properly scope and implement AI governance efforts.
How do we address the regulatory arbitrage between countries, especially between the global north and south?
speaker
Online audience member
explanation
This explores how to create equitable AI governance given different national contexts and capabilities.
How do we safeguard against the wrong usage of AI for military purposes?
speaker
Online audience member
explanation
This addresses critical concerns about AI’s dual-use nature and potential misuse in warfare.
How do we protect ourselves, especially young generations, from harms produced by AI in online interactions?
speaker
Online audience member (Omar)
explanation
This explores safeguards needed to protect vulnerable groups from AI-enabled online harms.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
AI Assistant for the PNAI report
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online