WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World

16 Dec 2024 14:00h - 15:00h


Session at a Glance

Summary

This discussion focused on AI regulation and governance, particularly exploring the global impact of the European Union’s AI Act. Panelists from various backgrounds discussed the potential for the EU AI Act to become a de facto global standard, with mixed opinions on its likelihood. Key challenges for global AI standardization were identified, including scoping issues, achieving consensus, and translating fundamental rights into technical standards.

The role of civil society in AI governance was emphasized, with participants highlighting the importance of monitoring governments, advocacy, and facilitating dialogue. The discussion also addressed the unique challenges faced by developing nations in leveraging AI while upholding human rights. These challenges include digital divides, lack of quality data, and capacity issues in both technological implementation and policy-making.

Panelists explored the differences between internet and AI standardization, noting that AI was not built on standards from the outset like the internet was. The potential for big tech companies to resist EU regulations was discussed, with the EU’s stance being that responsible AI development is non-negotiable for market access.

The discussion concluded by addressing concerns about AI’s broad impact across various fields, including healthcare and neurotechnology. Participants stressed the need for ongoing monitoring, impact assessments, and civil society engagement to ensure responsible AI development and use. Overall, the session highlighted the complex challenges in creating effective global AI governance while balancing innovation, regulation, and human rights considerations.

Key points

Major discussion points:

– The potential global impact of the EU AI Act and whether it will become a de facto global standard

– Challenges for international standardization of AI, including scoping, finding experts, and achieving consensus

– The role of civil society in AI governance and enforcement

– AI development and regulation in the Global South, including capacity building needs

– Balancing innovation and regulation/control of AI technologies

Overall purpose:

The purpose of this discussion was to explore AI regulation and policymaking from a global perspective, gathering views from different stakeholders on key issues related to AI governance, standardization, and implementation.

Tone:

The overall tone was informative and collaborative. Panelists shared their expert perspectives in a constructive manner, while also encouraging audience participation through polls and questions. The tone remained consistent throughout, with speakers building on each other’s points and addressing audience questions thoughtfully.

Speakers

– Auke Pals: KPMG

– Lisa Vermeer: Ministry of Economic Affairs in the Netherlands; implements the European AI Act in the Netherlands

– Ananda Gautam: Open Internet Nepal

– Juliana Sakai: Executive director of Transparency Brazil

Additional speakers:

– Wouter Cobus: With the platform Internet Standards

– Karen (no surname available): No specific role/title mentioned

Full session report

AI Regulation and Governance: A Global Perspective

This discussion explored the complex landscape of AI regulation and governance from a global perspective, bringing together experts from various backgrounds to address key issues in AI policymaking, standardisation, and implementation. The session included interactive voting elements and was constrained by time limitations.

EU AI Act and Its Global Impact

A central focus of the discussion was the potential global impact of the European Union’s AI Act. Lisa Vermeer, from the Ministry of Economic Affairs in the Netherlands, presented arguments both for and against the Act becoming a de facto global standard for AI governance. While the Act’s comprehensive approach could influence AI development worldwide, Vermeer noted that it might not be suitable for direct replication in other regions due to differing regulatory contexts.

Ananda Gautam, representing civil society from Nepal, highlighted the Act’s potential influence through extraterritorial jurisdiction, particularly on developing nations. This perspective underscored the far-reaching implications of EU regulations beyond its borders.

Auke Pals, from KPMG, raised concerns about potential resistance from big tech companies in complying with EU AI Act requirements. This point introduced the complex dynamics between regulators and industry players in shaping the future of AI governance.

Challenges in Global AI Standardisation

The discussion revealed significant challenges in achieving global AI standardisation. Pals pointed out issues such as fragmentation and overlap between different standardisation bodies, as well as the difficulty in balancing regulation, standardisation, and innovation. The rapidly evolving nature of AI technology was identified as a major obstacle to effective standardisation. A suggestion was made to learn from internet standardization processes in developing AI standards.

Gautam brought attention to the lack of capacity in developing nations to implement or create AI standards, highlighting a crucial gap in global AI governance. This perspective emphasised the need for inclusive approaches that consider the diverse contexts and capabilities of different nations.

Role of Civil Society in AI Governance

The importance of civil society in AI governance emerged as a key theme, with strong consensus among speakers. Juliana Sakai, from Transparency Brazil, shared insights from the Brazilian experience, highlighting how existing legal frameworks can be leveraged to challenge AI implementations. She emphasized the role of civil society in:

1. Monitoring government use of AI systems

2. Advocating for transparency and accountability in AI implementation

3. Facilitating dialogue between stakeholders on AI governance

Gautam added the importance of capacity building and raising awareness about AI impacts, particularly in developing nations. This multifaceted role of civil society was seen as essential for ensuring responsible AI development and use globally.

AI Challenges and Opportunities for Developing Nations

The discussion highlighted both challenges and opportunities for developing nations in the context of AI. Gautam elaborated on issues such as the digital divide, language barriers, and lack of quality data and technological capacity, which could hinder AI adoption and development. However, he also emphasised the potential to leverage AI in addressing development challenges, particularly in education and healthcare sectors.

The need for frameworks to ensure AI upholds human rights globally was stressed, with particular emphasis on accommodating the needs of developing nations in global AI governance structures. This perspective underscored the importance of inclusive approaches to AI regulation and development.

AI in Military Contexts

A significant point raised during the Q&A session was the potential use of AI in military contexts. The discussion touched on concerns about Gaza being used as a testing ground for AI in warfare. This highlighted the critical need for ethical considerations and international regulations regarding the use of AI in military and security domains.

Ethical Considerations and Societal Impact

The discussion also touched on deeper ethical considerations and the broader societal impact of AI. Concerns were raised about AI’s potential to replicate and amplify human biases, particularly in sensitive areas like healthcare. This broadened the conversation beyond regulatory frameworks to include the ethical implications of AI’s increasing role in society.

Conclusion

The discussion provided valuable insights into the current state of AI governance globally, highlighting the complex interplay between regulation, innovation, and ethical considerations. While many questions remain open-ended, the session underscored the need for ongoing dialogue, collaborative approaches, and flexible governance frameworks. These frameworks must be able to adapt to the rapidly evolving AI landscape while addressing fundamental concerns about fairness, transparency, and human rights across diverse global contexts.

Session Transcript

Auke Pals: are joining this session. My name is Auke, Auke Pals. I work for KPMG. We’re here today in the AI Regulation Unveiled session. What we’re trying to do in this session is to explore AI regulation, and we’re also trying to interact with you as much as possible so we can gather your views on AI regulation and policymaking worldwide. I’m not here alone. Next to me is Lisa, Lisa Vermeer. Welcome. Juliana is joining us online. Welcome as well. And Ananda is here, also next to me in the room. Welcome all. Can I give you the floor, Lisa, to introduce yourself?

Lisa Vermeer: Yes, thank you so much. My name is Lisa Vermeer. I work at the Ministry of Economic Affairs in the Netherlands, and one of my main jobs is to implement the European AI Act in the Netherlands. I need to add that to my introduction.

Ananda Gautam: Hello, everyone. My name is Ananda Gautam. I’m from Nepal. I work with Open Internet Nepal, and I belong to the civil society community. I work on capacity building of young people and on making the internet more transparent, inclusive, and sustainable.

Auke Pals: Thank you. Juliana, can I give you the floor as well?

Juliana Sakai: Yes, sure. Can you hear me? Yes, I can hear you. So thank you so much. I am Juliana Sakai. I’m the executive director of Transparency Brazil, which is an independent NGO devoted to promoting more transparency and accountability in the Brazilian government. This also includes the government’s use of AI, so it has been monitoring and working on how the Brazilian government is deploying, developing and using AI, and producing recommendations in this field. In parallel, it is also monitoring how AI regulation is being discussed in Congress, right?

Auke Pals: Thank you. Thank you very much. So this is our panel for today, but we’re here for an interactive session, so I would encourage you all to join the discussion once we’re there. But first, I would like you to participate in a vote. You can scan the QR code or go to kpngvote.nl and log in with the code IGF2024. As a starter, we’d like to introduce to you the global impact of European AI regulations, and for this I would like to give the floor to Lisa.

Lisa Vermeer: Thank you so much. Well, for this policy question, I would like to start with the online poll and then we can see what comes out of it. So let’s see if it works. The question is: do you believe that the EU AI Act will become the de facto global standard for AI governance? It is often claimed to be one of the first comprehensive AI laws in the world. There are many other laws, but the question is, will it become the global standard? So what do you think, yes or no, or do you actually have no idea what AI is about? That’s also possible. We have six votes in already. Don’t be afraid to just choose something, although you might have a nuanced opinion. We’re looking at the room. I guess most of you have voted by now, so let’s go to the results. Interesting, so everyone knows what AI is. That’s a good thing to know. It’s 55% yes and 44% no, so a slight preference for yes. I’m really looking forward to hearing more about your perspectives on how this would work. For this session, I would like to share my thoughts about why the answer can be yes and why it can be no, and what is, in my perspective, a challenge for all of us. If you look at the European AI Act, it’s product safety regulation. The idea is that for all AI systems that enter the market in the EU, in the whole European Union, you can assume that they are safe, because the AI Act sets requirements for several types of risky AI, and if an AI system falls in one of these categories, it has to meet certain requirements before it can be sold or used in the EU, by the private sector, by the public sector, by basically everyone. That means that for lots of AI systems there will be requirements that make them actually safer. And then safety, you can look at it for, it’s… This, maybe you can move it out, what?

Auke Pals: You have to hold it closer.

Lisa Vermeer: Okay, thanks for the, yeah, perfect. This is better for the audience, I think. Thanks. So the safety of all AI systems will be improved, and that means there are requirements for secure AI, for healthy AI, and for fundamental-rights-abiding AI. The risks that may come with AI in these areas will be tackled for all AI systems before they enter the market; at least, that’s the premise of the law. That means that these systems, when they are made by European companies, or by big companies from across the world, if they are made for the European market, will be safe enough and meet all the requirements. And that is presumed to have the effect that lots of companies will build one type of AI system to sell everywhere. Because of the EU’s requirements, they will build a system, for example, for the health sector to use in a hospital, and they will meet the requirements of the AI Act to sell it in Europe, but then other areas of the world will also benefit from the fact that this AI meets the requirements, also when it’s sold, for example, to a hospital in Nepal or anywhere else in the world. For a whole range of topics that is going to be the case. That makes me expect that you can say yes, because it will set the standards, and then it will be the standard for lots of AI across the world. But there is a whole range of risky areas in the AI Act where it may be difficult to say whether the requirements will really be adopted in the future. For example, if you look at critical infrastructure or biometric AI, they will be regulated, but we can expect that some companies will build multiple products. They will make the safest products for the EU, but then also build other products that do not meet the same requirements, for example data safety requirements, and sell those in the rest of the world. You see that happening a lot.
And yeah, so it depends on the incentive for the company whether they are going to make one product for the whole world, or just one that is super secure for the EU. That’s why the yes is maybe a bit limited in the end. So there is also very much a case for no as an answer, because the EU is a very specific area of the world in terms of regulation. A lot of regulation has already been adopted for the digital economy and for personal data, for example with the GDPR and also the Data Act. There is really a very dense regulatory field, which is very different from the regulatory ecosystem in other areas of the world. For example, in big areas like India, or maybe Brazil or Guyana, the legal context in which the law would be adopted is very different. That’s why the AI Act’s design may not be suitable for other areas of the world to replicate. Also, product regulation is a very old way of regulating markets. There’s lots of product regulation in the EU, but it may not be the approach in other areas of the world, which is why it’s not that easy to replicate. And then another challenge that I wanted to share with you is that the AI Act’s enforcement promises to be really challenging, because the AI Act is very broad and sets rules for a whole range of areas, basically touching all industries and all public areas. How do you effectively enforce such a law? How do you make sure that the regulators, not policy- and lawmakers like myself, but the regulators that are going to do the oversight of the law, are really able to work with it and make sure that people, companies and organizations abide by the law? Already in the EU this is a major challenge, and it’s something that I’m really working on a lot and wrapping my head around.
And I think that is also the case in lots of other areas. Worldwide, it may be quite difficult to have a law like the AI Act, because how do you enforce a law which is so broad and that creates uncertainty? So yeah, I think I’ll leave it at that.

Auke Pals: Thank you very much, Lisa. So I hope at this point the EU AI Act could steer us in the right direction, and hopefully it can be adapted once we learn how to make use of it. But that’s also partly up to the market, I guess, and partly up to the standardization bodies that are involved in making standards for AI, which is also the bridge to the next policy question. This policy question is about the actions for international standardization bodies. Currently, there are loads of standardization bodies trying to get involved in AI standardization, but we also see some challenges for those global standardization bodies. And now again a question for you all: what are the biggest challenges for global standardization of AI? I would like to encourage you to vote again, and the answers will be on the screen. I’ll give you some time to grab your phone or log in on the same page. So: scoping indeed, so what do we consider AI. Finding experts. Achieving consensus. Compatibility. Translating fundamental rights into technical standards. Finding common ground. Different views on the existence of human beings. AI staying, or just the next blockchain hype, also interesting. Capacity of experts. Local contexts. Trust. Tough question. Universal conceptual understanding that AI replicates traits of minds and persons. Integrating ethical and human rights standards into language models. So we’ll move on to the next one.
So, really interesting challenges, and I really think those are all true, to be honest. We are indeed maybe in the early stage of AI and AI standardization. Currently a lot of standardization bodies are active in the field, trying to come up with standards: ethical guidelines, technical standards. However, what we also see is that those standards are quite fragmented; all different kinds of standardization bodies are trying to deal with AI in their own way. And I don’t think it’s a hype, so I don’t believe we are in the next blockchain hype, as one of the participants just suggested. What I do see, however, is that there is quite some overlap between standardization bodies and initiatives trying to make a standard according to their own best practices. That brings me to the second point I was trying to make, which is sector-specific complexity. What I see in my work is that some standards being created are quite generic, and those might be applicable to all kinds of AI use cases. However, some sectors really do want more steering on how to make use of those standards. For instance, healthcare requires very different standards than the mobility industry does for autonomous cars, and the defense industry also requires different standards, which might not even be publicly shared. My third point is that there are real cultural and regulatory diversities. Lisa just mentioned the EU AI Act, which is applicable in the EU and for EU inhabitants, and that reflects the EU way of thinking. When we create ethical standards or guidelines in one of the standardization bodies, it may really matter where those guidelines are created, which can contribute to a good debate that we might be able to have on a global scale in the future as well. My fourth point is: how can we balance this? Balancing between regulation, standardization and innovation.
It was mentioned in a different session that I attended today: how can we also strike a balance between the regulation of AI initiatives and innovation? A small startup is not the first one to be looking at industry standards, for instance. They will just create according to their own best practices and way of working, and regulation could hinder those startups in being innovative. And my last point is the dynamic technology landscape. What we saw in regulation is that when the EU AI Act was being developed, generative AI was not that big in the beginning, but later on, in the final stages of the negotiation, it became quite big and there was an urge and a need to regulate it. It’s the same with standardization: standardization takes a lot of time and is, in the end, created on the basis of what works best. And with a sector that is really innovative and trying to take up all the newest technologies available, this might be a challenge for standardization on a global scale. With this, I would like to conclude my introductory remarks and give the floor to Juliana.

Juliana Sakai: Hi everyone, thank you. So we now have policy question three, with the theme of enhancing enforcement through civil society. I would like you to answer: in your opinion, what is the most significant role civil society can play in AI governance? Please share your thoughts with us. Log in, share your opinion. So it’s coming in right now. Monitor the government. Don’t think in problems, think in solutions. Capacity building. Advocacy. Vote for parties that have good plans. Voice concerns. As a user of public services, it’s important to have a voice through consultations. Democratic control. Observe the potential impact of AI systems. Defending and protecting values and the public interest. Facilitating dialogue. Participating in standardization processes. Monitor the government. So we are back again. Thank you for sharing so much. Make the minority heard, yeah. So, as a member of civil society, I would like to talk a little bit about the context we are in now, with AI regulation coming, and I would like to distinguish two different contexts. One is the context in which specific AI regulations are currently being implemented, like the EU, for example, or where actors are directly affected, like the US, as Lisa mentioned. So the producers, the tech companies selling AI systems to the European Union, will have to comply with the EU legislation. The other context is places like Brazil, for example: we don’t really sell products to the EU, not massively, technologically speaking. So, let’s say, the global majority that is not being affected directly, at least, by the legislation, but might experience an indirect influence.
But let’s begin where AI regulations are currently being implemented. As a new piece of legislation is implemented, civil society has a huge role in shaping its enforcement: identifying what the problems are, where implementation is not working and why, and advocating for institutions to take measures. But in order to do all these assessments, to understand what is working and what is not, and then, at the end, present these problems to society and to the institutions, we have to have real transparency. So one thing is being able to access information from both the government and the companies, to understand what is working and what is not. And once we secure a level of transparency, then we can, at the end, really do the reporting and analysis and so on. The second environment, as I mentioned, is where we don’t have a specific AI framework and don’t massively sell products to the EU. I want to share a little bit of the Brazilian experience, and civil society’s experience so far, on AI governance without having a specific framework for this. I think the first thing we have to do is really understand the current existing legal framework that can actually protect rights in the context of AI use. In Brazil, for example, we have consumer rights and the general data protection law, and both of them have been used to prevent the abuse of certain tools, especially facial recognition systems. From this starting point, civil society can present complaints to Brazilian institutions. For example, IDEC, the Institute for Consumer Protection in Brazil, has filed administrative and judicial procedures against the use of these tools in different scenarios: for example, when the São Paulo subway started using facial recognition for marketing and advertising purposes, collecting information on the reactions of people watching advertisements, and also in clothing stores, where they were likewise recording and capturing information through facial recognition systems.
In both cases, there was absolutely no consent from the consumer. In this situation, the Institute for Consumer Protection won the case both in the judicial and in the administrative sphere, so the Metro and the clothing store stopped using these systems. I think this is more or less what I wanted to say to start the conversation: even in a context where we do not have a specific AI framework, we still have a lot to work with on the governance of these AI systems. And just to close: as I mentioned, we are currently discussing an AI bill in Congress, and a risk-based framework is also being discussed there. So this is also where, at the end of the day, civil society is fighting to protect its rights in a more specific way against the dangers of AI systems.

Auke Pals: Thank you very much for that great overview. And yeah, we really see the importance of civil society being active in this, so it’s great that you and your organization are part of that. Before we move on, I’d like to look at the chat as well. Okay, we’ll do that after the next speaker. Then I’ll share my screen again, and I’m giving the floor to Ananda.

Ananda Gautam: Thank you so much for describing the role of civil society; I think Juliana has started doing that. So I’ll be discussing from the Global South perspective. We started the discussion by asking: will the EU AI Act be a de facto regulation for AI? I think it is the most comprehensive legislation existing today. Another important aspect is its extraterritorial jurisdiction, which means that it can regulate AI products and services outside the EU as well. So, in the context of developing nations and of human rights: the basic fundamentals of an AI system that we talk about are the quality of the data and the bias in the data. Looking from that perspective, there are two things that need to be considered. One is social, the other is technological. The fundamental data, the quality of data, is what trains the algorithm of the AI. In developing nations there are many challenges: we are talking about AI governance, but there still exists a digital divide, and we are also talking about an AI divide now. In the Global South, there are still more than 2 billion people who don’t have access to the internet itself. And then the data standards, the quality and the collection of data, I think that is very challenging. Without standard data, it is very challenging to build AI models: they might be biased, or they might not be as efficient as they were expected to be. The other consideration is technological: developing nations lack the capacity to actually implement AI or build the models. And if we talk about policy and legislation, they also lack the capacity to build their own legislation. After the EU, many countries are trying to draft their own legislation, but they don’t do it correctly because they don’t have that capacity. So how we develop their capacity is one thing.
The other question is how AI can be leveraged in developing nations. If used correctly, maybe we can use it to close the digital divide. Maybe we can use it to empower populations that do not have the same digital literacy as a person living in New York or somewhere in the EU region, where digital literacy is no longer a problem, access to the internet is no longer a problem, and access to technology is not a problem. In developing nations there are many other access barriers as well. There is a language barrier: AI models still cannot work in the native languages. If we want to train them, we don’t have enough data, and if they are trained on publicly available data, there are other consequences around copyright and other issues. This makes the development of AI a bit complex. But if developed nations help these countries build capacity, maybe developing nations can leverage the power of AI to actually address the issues we have been facing: giving access to technologies, and making systems more accessible in terms of language or any other barriers we have. We might even be able to use it to provide medical facilities in rural areas, or to enhance the education system by implementing AI in education. We can create virtual teachers who can interact with students, or implement personalized tutors. There are many cases where AI could be leveraged in developing nations, but we have to be very mindful that these considerations are part of a global debate: how do we make AI systems more responsible? That has to be both societal and technological. If there are already biases in society, the AI algorithm will definitely be biased, until and unless the bias in society is eliminated, because AI is based on the data that is available in society, the data that we have created.
So we have to be very mindful of what we feed the AI system; that is very important. There is one thing in the AI ecosystem that developing nations didn’t have before: during the 2000 dotcom boom, developing nations couldn’t leverage the power of the internet the way developed nations could, as we can see today. But AI is at a very early stage, and if we could accommodate the needs of those developing nations and then leverage AI, I think we can make them much more prosperous in terms of the economy and other social benefits. I would like to stop here.

Auke Pals: I think we have to go for a discussion then. Yes, we have support here. Thank you. The question is: does the AI Act give possibilities for leveraging AI in developing nations while upholding human rights? No, not by itself. Yes. I don’t think so. Human rights is not globally enforceable. Yes, as a guide to develop local regulatory frameworks. Possibly. On the contrary. Yes. Global organizations, I think, are being developed. Influenced. Being inspirational. Only to European companies operating elsewhere. Not by itself. Does anyone on the panel want to react to these? They’re rotating, right.

Ananda Gautam: My response would be that the AI Act itself cannot be leveraged to uphold human rights, because it is focused on what can and cannot be done; legislation is always focused on the do’s and don’ts of something. But if we instead have policies that accommodate the development of AI, or if the extraterritorial jurisdiction is used so that developed nations help developing nations actually leverage AI, that can be one option. Another way to uphold human rights is through the various frameworks that exist: UNESCO has one, and the OECD is working on the second iteration of its principles. Those kinds of frameworks would be among the fundamental practices that could help ensure human rights, not only in a developing-country context but in the global context.

Auke Pals: Thank you. Yeah, those are good questions. We had planned breakout discussions, but given that it’s already a quarter to six and we have until six o’clock, we thought: let’s just do a plenary, take questions from the floor and from the online participants. Maybe first the question that was asked in the chat, about Gaza being a test ground for using AI, which I think is very urgent and has been quite shocking. Thanks, Lisa, for taking that.

Lisa Vermeer: So I’m taking the question from the chat about Gaza being the testing ground for several sorts of AI. It’s rather difficult to answer this question because there is a lot of nuance to it. Let me first say that the AI Act in the EU is, of course, an initiative to try to make AI more responsible and to avoid AI systems that pose serious risks to, for example, the safety and fundamental rights of people. But the AI Act does not touch all AI systems: the military domain, defence and national security are excluded from it. That does not mean there is no ongoing discussion about these areas. Rather the opposite: there has already been a long discussion, especially at the international level, about responsible use of AI in the military domain, for example the REAIM process initiated by the Netherlands, and there is a long-running, mainly Geneva-based conversation about lethal autonomous weapons. Of course, in Gaza a lot of AI was still used. And I’m afraid, and that is my personal opinion, that AI will be used for very bad purposes. But the discussion about how to tackle this, how to disincentivize it, how to make it impossible, is really on the table, very straightforwardly. It’s being discussed between stakeholders, between governments and in UN bodies. So it gives some hope that it is on the table and that there may be change. But that’s where we are now.

Auke Pals: Thank you, Lisa, for answering the question from the chat, which is also a really urgent topic indeed. I would also like to ask a question to the audience, because in the audience I do see some people involved also in internet standards. And my question to the audience is, what can we learn in the creation of standards for AI from the internet standardization process? Can I give someone the floor from the audience?

AUDIENCE: Right, yeah, I can say something about it. My name is Wouter Cobus, I’m with the Platform Internet Standards. From my perspective, there’s quite a difference: the internet itself was built on standards, and I think those standards really formed the internet as we know it right now. Whereas AI, although it’s not my expertise, seems to me more like a technology that is already out there, and now we’re trying to introduce standards to limit or control it. It’s not founded on standards the way the internet was. So there is something to learn, but that’s a difference between AI and internet standards in that sense.

Auke Pals: Thank you, Wouter, for sharing your thoughts. My reflection on that is that AI is indeed not built on standards, but is now being regulated after the threats have been identified. So now we’re trying to re-engineer the wheel to create usable standards in certain domains. Is there any other reflection from the audience? Yeah, let’s move to the next slide.

AUDIENCE: I have a question, connecting to balancing innovation and control. So I think it’s for you, Auke. Do you think there is a risk that big tech says no to the EU? And if so, what can be changed to balance our vision in the EU against the vision of big tech?

Auke Pals: Let me think about that. There is indeed a risk that big tech says no to the EU, and I do think that not only on AI but on other topics too, the EU is being challenged by big tech. So your question is: what can be changed to balance our vision in the EU against that of big tech?

Lisa Vermeer: To be honest, I don’t think I have a clear answer on that; maybe some of my colleagues do. You do see this happening, because especially Meta is at the moment really ramping up against the AI Act and its consequences. For example, the AI Act regulates general-purpose AI models, which comprise large language models, through a code of practice, and most large companies from the US have signed up to the AI Pact, an initiative from the European Commission to collaborate with companies on becoming compliant with the AI Act. But we now see that Meta especially, and other companies too, are replying to this code of practice on GPAI models, the large language models; some are more constructive, and others are really saying: we don’t want this, because this is going to make it very hard for us. So I get this question a lot. During the negotiations of the AI Act, basically all countries were asked: how do you see the AI Act as a barrier to innovation? The idea is that the AI Act is not a barrier to the right kind of innovation. Because it takes a risk-oriented approach, a lot of AI falls outside its scope; but to be honest, a lot of AI also falls within the scope. The argument is that the EU deliberately chose this: we want responsible AI to develop, innovate, grow and scale in the EU. And if large companies, from the US or from other parts of the world, are not responsible enough, i.e. not meeting our EU criteria, then it’s the kind of AI that we don’t want. So it is a balance. But if big tech says no to the EU, then the EU says no to big tech. It really comes down to: do you want access to our market or not?
It will be interesting to see how this goes in the coming years, because we have a new European Commission in Brussels with quite some enforcement power, also under the Digital Services Act, a major law affecting large platforms, and with the AI Act applying in almost a year. How will they play their cards? We first had Commissioner Thierry Breton, who was really direct and forceful towards, for example, Elon Musk. Now you see that the new commissioner strikes a more conciliatory tone towards the owner of X. The law is there, but how it works out depends on how forcefully the EU is going to stand its ground, impose the fines, et cetera. It remains to be seen. Thank you for your question.

Auke Pals: Thank you, Lisa, for also answering the question. I saw the hand raised from Karen online. Karen, are you there? I can unmute you.

Karen: Yeah, I’m sorry, I was writing my concern in the chat. I think the difference with, for example, the internet, is that AI is not limited to giving or granting information; the information it gives is already biased. Another concern is that it also replicates and improves human traits. It also interprets data, for example when it’s used in medical or neurotechnological devices: it will read this information, evaluate it, interpret the data, and then give feedback to the neurotechnology as well. I’m talking about, for example, electroencephalography devices that will read, interpret, and then send signals back to the neurotechnology to either activate or suppress some activity in the brain. This is not regulated from design through development and use, and it is not regulated how the data will be moved transnationally. I think we have a lot of concerns. It’s a broad concern because it affects many dimensions, many fields. I do think that society does not fully understand the profound impact of using and interacting with AI. Thank you.

Auke Pals: Thank you very much, Karen. Yeah, Juliana, do you want to reply on that?

Juliana Sakai: Yeah, sure, thank you for your comments. We have always been following technological developments and advancements, so to say, and I think this is really where civil society plays a big role: trying to explain what is going on and making more information available. And when I say making more information available, I also mean breaking down the consequences you just mentioned, right, Karen? For each kind of use in each field, civil society has to monitor the results, how the implementation is going, and how the testing of each system is working. This has to be developed in parallel, sometimes with the help of the government. When we’re talking about an impact assessment, it has to happen prior to the launch of a tool; and once a tool is launched, civil society needs the information and the data to collect and analyse what kind of impact and algorithmic bias that tool is producing, and how it might worsen inequality. So I think this is pretty much the field we’ll have to work on, so that at the end of the day civil society, the population, the consumers, the users as a whole have more information on how we should protect ourselves, right? And for this, organized civil society, academia and journalists are there to spread the information and support all the advocacy work. This is really important, because at the end of the day institutions are held accountable only if civil society is demanding it. So there is a flow: civil society demands, and the institutions answer to this. We have to press and demand that the institutions take action, the real measures, to protect people and to implement the regulations being proposed.

Auke Pals: Thank you very much for your response. I’m getting the sign that the session is nearing its end. I would like to give the opportunity for someone to reflect or make a last comment. If there is none, I would like to thank my panelists, Lisa, Ananda and Juliana, for being part of the session. Much more discussion could be had on this topic, but not within the 60 minutes we’ve received today. I would really encourage you to stay in touch with us through LinkedIn; add us if you need us or want to start a new discussion. With this, I would like to close the session. Thank you very much. Thank you, Juliana and Manon. Bye-bye.


Lisa Vermeer

Speech speed

152 words per minute

Speech length

2035 words

Speech time

802 seconds

EU AI Act could become de facto global standard for AI governance

Explanation

The EU AI Act sets safety requirements for AI systems entering the EU market. Companies may build one type of AI system meeting EU requirements to sell globally, potentially making it a de facto standard.

Evidence

Example of AI systems for health sector being built to EU standards but benefiting hospitals worldwide

Major Discussion Point

Impact of EU AI Act globally

Agreed with

Ananda Gautam

Auke Pals

Agreed on

Global impact of EU AI Act

Differed with

Ananda Gautam

Differed on

Global impact of EU AI Act

EU AI Act may not be suitable for replication in other regions due to different regulatory contexts

Explanation

The EU has a specific regulatory ecosystem for the digital economy that differs from other regions. The product regulation approach of the AI Act may not be easily replicated elsewhere.

Evidence

Mentions existing EU regulations like GDPR and Data Act as context for AI Act

Major Discussion Point

Impact of EU AI Act globally

Differed with

Ananda Gautam

Differed on

Global impact of EU AI Act


Ananda Gautam

Speech speed

124 words per minute

Speech length

1000 words

Speech time

482 seconds

EU AI Act’s extraterritorial jurisdiction could influence AI products/services outside EU

Explanation

The EU AI Act has extraterritorial jurisdiction, meaning it can regulate AI products and services from outside the EU. This could influence AI development globally.

Major Discussion Point

Impact of EU AI Act globally

Agreed with

Lisa Vermeer

Auke Pals

Agreed on

Global impact of EU AI Act

Differed with

Lisa Vermeer

Differed on

Global impact of EU AI Act

Lack of capacity in developing nations to implement or create AI standards

Explanation

Developing nations often lack the technological capacity and expertise to implement AI standards or create their own AI regulations. This creates challenges in global AI governance.

Evidence

Mentions digital divide and lack of internet access for over 2 billion people in Global South

Major Discussion Point

Challenges in global AI standardization

Agreed with

Auke Pals

Agreed on

Challenges in global AI standardization

Potential to leverage AI to address development challenges

Explanation

AI could be used to address development challenges in Global South countries. It could help close the digital divide and empower populations with lower digital literacy.

Evidence

Examples of using AI for medical facilities in rural areas and enhancing education systems

Major Discussion Point

AI challenges and opportunities for developing nations

Need for frameworks to ensure AI upholds human rights globally

Explanation

Global frameworks are needed to ensure AI upholds human rights, especially in developing nations. Existing frameworks like UNESCO’s and OECD’s could be fundamental practices for ensuring human rights in AI.

Evidence

Mentions UNESCO and OECD frameworks as examples

Major Discussion Point

AI challenges and opportunities for developing nations

Importance of accommodating needs of developing nations in global AI governance

Explanation

Global AI governance should accommodate the needs of developing nations. This could help these countries leverage AI for economic and social benefits, unlike during the dotcom boom.

Evidence

Comparison to dotcom boom where developing nations couldn’t leverage internet power like developed nations

Major Discussion Point

AI challenges and opportunities for developing nations


Auke Pals

Speech speed

102 words per minute

Speech length

1570 words

Speech time

917 seconds

Big tech companies may resist complying with EU AI Act requirements

Explanation

There is a risk that large technology companies might refuse to comply with EU AI Act requirements. This creates a challenge in balancing EU’s vision for AI governance with the interests of big tech.

Evidence

Mentions Meta ramping up against AI Act and its consequences

Major Discussion Point

Impact of EU AI Act globally

Agreed with

Lisa Vermeer

Ananda Gautam

Agreed on

Global impact of EU AI Act

Fragmentation and overlap between different standardization bodies

Explanation

Multiple standardization bodies are active in creating AI standards, leading to fragmentation and overlap. This creates challenges in establishing coherent global AI standards.

Evidence

Observation of different standardization bodies trying to deal with AI in their own way

Major Discussion Point

Challenges in global AI standardization

Agreed with

Ananda Gautam

Agreed on

Challenges in global AI standardization

Balancing regulation, standardization and innovation

Explanation

There is a need to balance AI regulation and standardization with innovation. Strict regulations might hinder innovative initiatives, especially for small startups.

Evidence

Example of small startups not looking at industry standards first, but creating according to their best practices

Major Discussion Point

Challenges in global AI standardization

Rapidly evolving AI technology landscape makes standardization difficult

Explanation

The AI field is rapidly evolving, making it challenging to create timely and relevant standards. By the time standards are developed, the technology may have already advanced significantly.

Evidence

Example of generative AI becoming significant during final stages of EU AI Act negotiations

Major Discussion Point

Challenges in global AI standardization

Agreed with

Ananda Gautam

Agreed on

Challenges in global AI standardization


Juliana Sakai

Speech speed

101 words per minute

Speech length

1231 words

Speech time

729 seconds

Monitoring government use of AI systems

Explanation

Civil society plays a crucial role in monitoring how governments use AI systems. This involves identifying problems in implementation and advocating for institutions to take measures.

Evidence

Example of Brazilian civil society filing complaints against facial recognition systems used without consent

Major Discussion Point

Role of civil society in AI governance

Advocating for transparency and accountability in AI implementation

Explanation

Civil society organizations advocate for transparency in AI implementation by both governments and companies. This allows for assessment of what is working and what isn’t in AI governance.

Evidence

Mentions need for real transparency to assess information from government and companies

Major Discussion Point

Role of civil society in AI governance

Facilitating dialogue between stakeholders on AI governance

Explanation

Civil society plays a role in facilitating dialogue between different stakeholders on AI governance. This includes spreading information and supporting advocacy work.

Evidence

Mentions civil society, academia, and journalists working to spread information and support advocacy

Major Discussion Point

Role of civil society in AI governance

Agreements

Agreement Points

Global impact of EU AI Act

Lisa Vermeer

Ananda Gautam

Auke Pals

EU AI Act could become de facto global standard for AI governance

EU AI Act’s extraterritorial jurisdiction could influence AI products/services outside EU

Big tech companies may resist complying with EU AI Act requirements

The speakers agree that the EU AI Act has potential for global impact, whether through becoming a de facto standard, influencing products/services outside the EU, or causing resistance from big tech companies.

Challenges in global AI standardization

Ananda Gautam

Auke Pals

Lack of capacity in developing nations to implement or create AI standards

Fragmentation and overlap between different standardization bodies

Rapidly evolving AI technology landscape makes standardization difficult

Both speakers highlight various challenges in creating and implementing global AI standards, including capacity issues in developing nations, fragmentation among standardization bodies, and the rapidly evolving nature of AI technology.

Similar Viewpoints

Both speakers emphasize the importance of responsible AI use and governance, particularly in addressing development challenges and ensuring proper government use of AI systems.

Ananda Gautam

Juliana Sakai

Potential to leverage AI to address development challenges

Monitoring government use of AI systems

Unexpected Consensus

Importance of civil society in AI governance

Ananda Gautam

Juliana Sakai

Need for frameworks to ensure AI upholds human rights globally

Advocating for transparency and accountability in AI implementation

Facilitating dialogue between stakeholders on AI governance

While not unexpected, there was a strong consensus on the crucial role of civil society in AI governance, spanning from developing nations’ perspective to more general global governance issues.

Overall Assessment

Summary

The main areas of agreement include the global impact of the EU AI Act, challenges in global AI standardization, the potential of AI to address development challenges, and the importance of civil society in AI governance.

Consensus level

There is a moderate level of consensus among the speakers, particularly on the challenges and potential impacts of AI governance. This consensus suggests a shared understanding of the complex issues surrounding AI regulation and standardization, which could facilitate more coordinated efforts in addressing these challenges globally.

Differences

Different Viewpoints

Global impact of EU AI Act

Lisa Vermeer

Ananda Gautam

EU AI Act could become de facto global standard for AI governance

EU AI Act may not be suitable for replication in other regions due to different regulatory contexts

EU AI Act’s extraterritorial jurisdiction could influence AI products/services outside EU

Lisa Vermeer presents both arguments for and against the EU AI Act becoming a global standard, while Ananda Gautam focuses more on its potential influence through extraterritorial jurisdiction, particularly on developing nations.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the global impact of the EU AI Act, the challenges in implementing global AI standards, and the role of developing nations in AI governance.

Difference level

The level of disagreement among the speakers is moderate. While there are differing perspectives on certain issues, there is also a significant amount of agreement on the challenges and complexities of global AI governance. This level of disagreement is constructive for the topic at hand, as it highlights the multifaceted nature of AI regulation and the need for diverse perspectives in shaping global AI policies.

Partial Agreements

All speakers agree on the challenges of implementing global AI standards, but they focus on different aspects: Lisa on regulatory contexts, Ananda on capacity issues in developing nations, and Auke on balancing regulation with innovation.

Lisa Vermeer

Ananda Gautam

Auke Pals

EU AI Act may not be suitable for replication in other regions due to different regulatory contexts

Lack of capacity in developing nations to implement or create AI standards

Balancing regulation, standardization and innovation

Takeaways

Key Takeaways

The EU AI Act could potentially become a de facto global standard for AI governance, but may not be suitable for direct replication in other regions

Global AI standardization faces challenges like fragmentation between bodies, balancing regulation and innovation, and rapidly evolving technology

Civil society plays important roles in AI governance including monitoring, advocacy, capacity building, and facilitating dialogue

Developing nations face challenges in AI adoption but also opportunities to leverage AI for development if their needs are accommodated in global governance frameworks

Resolutions and Action Items

None identified

Unresolved Issues

How to effectively enforce broad AI regulations like the EU AI Act

How to balance innovation and control in AI governance, especially with resistance from big tech companies

How to address the lack of quality data and technological capacity for AI in developing nations

How to ensure AI systems uphold human rights globally, especially in military/security domains

Suggested Compromises

Developing AI governance frameworks that can serve as guides for local regulatory frameworks rather than direct replication of EU AI Act

Balancing strict requirements for high-risk AI applications with more flexibility for low-risk innovations

Collaboration between developed and developing nations to build AI capacity while addressing local needs and contexts

Thought Provoking Comments

The EU AI Act enforcement promises to be really challenging because the AI Act is very broad and it sets rules for a whole range of areas, basically touching all industries and all public areas. And how do you effectively enforce such a law?

speaker

Lisa Vermeer

reason

This comment highlights a critical challenge in implementing AI regulation on a broad scale, raising important questions about practical enforcement.

impact

It shifted the discussion from theoretical benefits of EU AI regulation to practical challenges of implementation and enforcement across diverse sectors.

What we did see in regulation: when the EU AI Act was being developed, generative AI was not that big in the beginning, but later on, in the final stages of the negotiation, it became quite big and there was an urge and a need to regulate it.

speaker

Auke Pals

reason

This observation underscores the rapid pace of AI development and the challenge of creating regulations that can keep up with emerging technologies.

impact

It introduced the idea of the dynamic nature of AI technology and the need for flexible, adaptable regulation approaches.

In the context of developing nations and of human rights, the basic fundamentals of the AI systems we talk about are the quality of the data and the bias in the data.

speaker

Ananda Gautam

reason

This comment brings attention to the often overlooked challenges faced by developing nations in AI development and regulation, particularly regarding data quality and bias.

impact

It broadened the discussion to include global perspectives and highlighted the potential for AI to exacerbate existing inequalities.

AI is not limited to giving or granting information; the information it gives is already biased. Another concern is that it also replicates and improves human traits. It also interprets data, for example when it’s used in medical or neurotechnological devices.

speaker

Karen

reason

This comment raises complex ethical and practical concerns about AI’s ability to interpret and influence human behavior, particularly in sensitive areas like healthcare.

impact

It deepened the conversation by introducing more nuanced concerns about AI’s societal impact beyond just information provision, touching on issues of autonomy and medical ethics.

Overall Assessment

These key comments shaped the discussion by broadening its scope from initial focus on EU regulation to encompass global perspectives, practical implementation challenges, and deeper ethical considerations. They highlighted the complexity of AI governance, emphasizing the need for flexible, culturally sensitive approaches that can adapt to rapidly evolving technology while addressing fundamental issues of data quality, bias, and human rights.

Follow-up Questions

How can we effectively enforce broad AI regulations like the EU AI Act?

speaker

Lisa Vermeer

explanation

This is a major challenge for regulators and policymakers, as the broad scope of the AI Act makes oversight and enforcement complex.

How can we balance AI regulation, standardization, and innovation, particularly for small startups?

speaker

Auke Pals

explanation

There’s a need to find ways to regulate AI without hindering innovation, especially for smaller companies with limited resources.

How can developing nations build capacity to implement or create their own AI regulations?

speaker

Ananda Gautam

explanation

Many developing countries lack the technical and policy expertise to effectively regulate AI, which could lead to implementation challenges or inadequate protections.

How can AI be leveraged in developing nations to address issues like digital divide, language barriers, and access to education and healthcare?

speaker

Ananda Gautam

explanation

There’s potential for AI to help solve development challenges, but this requires careful consideration of local contexts and needs.

How can we ensure AI systems are trained on unbiased, high-quality data, particularly in developing nations?

speaker

Ananda Gautam

explanation

The quality and representativeness of training data is crucial for creating fair and effective AI systems, but this is particularly challenging in contexts with limited data infrastructure.

How can the international community address the use of AI in military contexts, given that this is often excluded from civilian AI regulations?

speaker

Lisa Vermeer

explanation

The use of AI in military applications raises significant ethical and security concerns that aren’t addressed by regulations like the EU AI Act.

How will the relationship between big tech companies and EU regulators evolve with the implementation of the AI Act?

speaker

Audience member (unnamed)

explanation

There’s tension between tech companies’ desire for innovation and the EU’s regulatory approach, which could impact the development and deployment of AI technologies.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.