WS #162 Overregulation: Balance Policy and Innovation in Technology

18 Dec 2024 08:00h - 09:15h


Session at a Glance

Summary

This workshop focused on balancing AI regulation and innovation, exploring how to foster technological advancement while ensuring safety and ethical standards. Panelists from diverse backgrounds discussed various regulatory approaches to AI governance, including risk-based, human rights-based, principles-based, rules-based, and outcomes-based models. They emphasized the need for flexible, adaptable regulations that can keep pace with rapid technological changes.


Key issues addressed included the role of AI in combating child sexual abuse material (CSAM), the importance of human rights in AI governance, and the challenges of implementing AI in healthcare. Panelists stressed the need for context-specific regulations, noting that a one-size-fits-all approach may not be suitable across different regions and sectors.


The discussion highlighted the importance of public participation in developing AI policies and the need for capacity building and digital literacy. Panelists shared examples of how AI has been used innovatively during crises like the COVID-19 pandemic, demonstrating the potential benefits of flexible regulatory approaches.


The workshop also touched on the challenges of regulating AI without stifling innovation, with some arguing that policies might be preferable to strict regulations in certain cases. The importance of considering local needs and existing regulatory frameworks when developing AI governance strategies was emphasized.


Overall, the discussion underscored the complexity of AI regulation and the need for a balanced approach that protects human rights and public safety while allowing for technological progress and innovation.


Key points

Major discussion points:


– Balancing AI regulation and innovation


– Different approaches to AI governance (e.g. risk-based, human rights-based, principles-based)


– Challenges of regulating AI, including privacy concerns and potential misuse (e.g. for child exploitation)


– Need for AI literacy and capacity building, especially in developing countries


– Importance of considering local context when developing AI policies


Overall purpose:


The goal of this discussion was to explore how to effectively regulate AI technologies in a way that promotes innovation while also protecting public safety and ethical standards. The panelists aimed to share diverse perspectives on AI governance approaches from different regions and sectors.


Tone:


The overall tone was thoughtful and constructive. Panelists acknowledged the complexity of the issues and the need to balance different priorities. There was general agreement on the importance of regulation, but also caution about over-regulation stifling innovation. The tone remained analytical and solution-oriented throughout, with panelists offering nuanced views on different regulatory approaches.


Speakers

– Nicolas Fiumarelli: Moderator, represents the Latin American and Caribbean group from the technical community


– Natalie Tercova: Chair of the IGF in the Czech Republic, member of ICANN, vice facilitator on the board of the ISOC Youth Standing Group, PhD candidate focusing on digital skills of children and adolescents


– Paola Galvez: Tech policy consultant, founding director of IDON AI Lab, UNESCO’s lead AI national expert in Peru, team leader at the Center for AI and Digital Policy


– Ananda Gautam: Represents the Youth Coalition on Internet Governance, global AI governance expert


– James Nathan Adjartey Amattey: From the private sector in Africa, focuses on innovation and impact on regulatory practices


– Osei Manu Kagyah: Online moderator


Additional speakers:


– Agustina: Audience member from Argentina


Full session report

AI Regulation and Innovation: Striking a Balance


This workshop explored the complex challenge of balancing AI regulation with innovation, bringing together experts from diverse backgrounds to discuss various approaches to AI governance. The discussion, moderated by Nicolas Fiumarelli, highlighted the need for flexible, adaptable regulations that can keep pace with rapid technological changes while ensuring safety and ethical standards.


Key Themes and Discussions


1. Approaches to AI Regulation


Paola Galvez, a tech policy consultant, stated that we are past the question of whether to regulate or not, and now the focus is on how to regulate. She outlined several regulatory approaches, including:


– Risk-based


– Human rights-based


– Principles-based


– Rules-based


– Outcomes-based


Galvez emphasized that regulation should not stifle innovation and stressed the importance of human rights-based approaches to AI governance, while acknowledging the implementation challenges these approaches face.


Ananda Gautam, representing the Youth Coalition on Internet Governance, advocated for flexible, principle-based approaches that can foster innovation while protecting rights. He offered a historical perspective, noting that if the internet had been heavily regulated in its early days, it might not have developed into the tool we use today.


Natalie Tercova, a researcher working on healthcare topics, proposed a risk-based approach, suggesting that high-risk AI applications should undergo rigorous review, while low-risk innovations could proceed under lighter regulatory requirements.


2. Balancing Innovation and Safety


James Nathan Adjartey Amattey, from the private sector in Africa, pointed out that the COVID-19 pandemic demonstrated the need for innovation over rigid regulation in times of crisis. He argued that certain regulatory frameworks are necessary for innovation to flourish, challenging the notion that regulation and innovation are inherently opposed.


3. Context-Specific Regulation


Panelists stressed the importance of developing context-appropriate AI governance, particularly for developing countries. Paola Galvez cautioned against simply copying EU regulations, arguing that local needs and existing regulatory frameworks should inform AI governance strategies.


4. AI Literacy and Capacity Building


James Nathan Adjartey Amattey highlighted the need for AI literacy programs for regulators, developers, and users to understand the risks and benefits of AI technologies. Paola Galvez emphasized that digital skills development is key to leveraging AI’s potential, particularly in developing countries.


5. Ethical Concerns and Human Rights


Natalie Tercova raised the issue of AI’s dual use in both creating and detecting child sexual abuse material (CSAM), highlighting the complex balance between leveraging AI for child protection and ensuring privacy rights. She discussed the challenges of using AI to detect CSAM while also acknowledging its potential role in generating such content.


6. Public Participation and Multi-stakeholder Collaboration


Panelists agreed on the importance of public participation in developing AI policies. Paola Galvez emphasized that multi-stakeholder collaboration is crucial for creating effective and inclusive AI governance frameworks.


Audience Questions and Responses


The session included a brief Q&A period, where audience members raised questions about:


– The role of AI in addressing climate change


– Strategies for promoting responsible AI development


– The potential for AI to exacerbate existing inequalities


Due to time constraints, not all questions could be addressed in depth, but panelists provided brief responses highlighting the need for continued research and dialogue on these topics.


Unresolved Issues and Future Directions


Several unresolved issues emerged from the discussion, including:


1. How to effectively balance privacy and safety in AI-powered content moderation


2. The extent of responsibility for AI developers and companies for the effects of their technologies


3. How to address the growing AI divide between developed and developing countries


The discussion highlighted the need for continued dialogue and collaboration to address these complex challenges.


Conclusion


The workshop concluded with a group photo of the panelists and moderator. Throughout the session, speakers emphasized the importance of flexible, context-specific governance strategies that can adapt to rapid technological changes and address the unique needs of different regions and sectors. The diverse perspectives shared by the panelists provided valuable insights into the ongoing challenges and opportunities in AI regulation and innovation.


Session Transcript

Nicolas Fiumarelli: worries. We invite everyone to sit at the main table if you want, so you can be more engaged in the session. Yes, we have one speaker who is stuck en route, in the Uber, but we will start the session and he will join later. So okay, good morning and good afternoon, everyone, including those online. It’s a great pleasure to welcome you all to this workshop called Over-Regulation: Balancing Policy and Innovation in Technology, under the sub-theme of Harnessing Innovation and Balancing Risk in the Digital Space. My name is Nicolas Fiumarelli and I will be moderating the session. I represent the Latin American and Caribbean group from the technical community, and it’s a privilege to be among such a distinguished group of panelists and participants. I am very glad that we have so many people in the room; I think the session title is very interesting for you. You know, we are in an era when we need to decide whether to regulate or not, so this is a hot topic nowadays. Technology and innovation have always been drivers of societal progress. However, the fast-paced evolution of digital technologies, especially artificial intelligence, presents unique challenges. So how can we foster innovation without stifling it through over-regulation? How do we ensure safety and ethical standards while allowing technology to reach its full potential? These are some of the critical questions we are going to address today, but this requires collective deliberation, so you are all invited to share your ideas as we aim to address them. The session will be conducted, as you know, in this roundtable format to encourage equal participation and interaction among our esteemed panelists and the audience. To set the stage, I will briefly introduce our panelists. Following this, each of the panelists will take a moment to introduce themselves and share their motivation for participating in this session. Afterwards, we will dive into the core discussion, addressing some of the key policy questions. Toward the end, we will open the floor for questions from the audience, both online and on-site, moderated by our colleague, Osei. So let’s meet our panelists. First, Natalie, Natalie Tercova. She’s the chair of the IGF in the Czech Republic, also a member of ICANN, and vice facilitator on the board of the ISOC Youth Standing Group. You know, the ISOC Youth Standing Group, together with the Youth Coalition on Internet Governance, organizes youth-led sessions every year to bring young voices to the Internet Governance Forum. She is also a PhD candidate focusing on the digital skills of children and adolescents, their online opportunities and risks, with an emphasis on online safety, online privacy, and AI. She has recently contributed to a report on AI usage in medical spheres, exploring the challenges of deploying AI technologies in health care. Additionally, her work includes critical research on the role of AI in addressing child sexual abuse


Natalie Tercova: material, called CSAM. So Natalie, will you please introduce yourself further and share your motivation for joining this session? Thank you so much, Nicolas. Can anyone hear me well? Perfect. So, as you said, and thank you for summing it up so perfectly, I am representing the academia stakeholder group. I am a researcher; it’s my day job. And I have recently been very much focused on AI and how it can impact the critical topics of my research, which on one side is the health system — for instance, finding health information online and how people trust health-oriented information provided by, for instance, AI-driven chatbots and so forth. On the other side, I’m also invested in the topic of CSAM, as you mentioned: harmful content focusing on children, and also the abuse of materials depicting children in intimate scenarios, where AI is perceived more as a double-edged sword. So I hope to tell you more about this during the session, as I feel this is a crucial topic. Thank you for having me.


Nicolas Fiumarelli: Thank you so much, Natalie. This is, as I said at the beginning, a hot topic, right? We are here to decide whether it is good to regulate or not. There are several factors that lead us to think about regulation, but on the other hand, regulation could undermine human rights, like access to information, among others. And there are several ways to regulate. So we are looking forward to a deep dive into these issues, hopefully producing good outcomes and key takeaways on how policymakers can actually find solutions to these kinds of problems. Now I will introduce Paola, Paola Galvez. She is Peruvian and a tech policy consultant dedicated to advancing ethical AI and human-centric digital regulation globally. She holds a Master of Public Policy from the University of Oxford and serves as the founding director of IDON AI Lab, UNESCO’s lead AI national expert in Peru, and a team leader at the Center for AI and Digital Policy. Paola brings a unique perspective from her work at the OECD and UNESCO on international AI governance. You must all be hearing about UNESCO these days, because countries all over the world are working on national AI strategies, and UNESCO has the RAM, the Readiness Assessment Methodology. Drawing on her experience with the UNESCO AI RAM in Peru, she will provide insights into balancing regulatory safeguards and fostering innovation on a global scale. So Paola, could you introduce yourself and tell us about your motivation for this workshop?


Paola Galvez: Good morning, everyone. Thanks so much, Nicolás. Thank you all for joining this session. I think it’s really a critical discussion to be having, but I will put a different opinion here. I don’t think the question is any longer whether to regulate or not to regulate; we’re past that, in my opinion. What we’re working on now is how to regulate, right? Let me go one step back, because you asked me to introduce myself a bit, and you did so well. Thanks so much, Nicolás. Just to give a bit of an overview, my perspective here comes from a background starting in the private sector. I worked at Microsoft for almost five years, and it was me saying, innovation must come; please do not prevent innovation in my country, which is a developing one. So I advocated for self-regulation, but that was back in 2013, when artificial intelligence was just starting in my country. In other countries it was far more developed, but the topic at the moment was cloud computing. So, just to mention, that was my first chapter. Then I worked in government. I advised the Secretary of Digital Transformation of Peru, and that was an absolutely meaningful role, because I contributed to the AI National Strategy and I led the Digital Skills National Strategy. That changed my view a bit, and I understood what it is to work in government and what the challenges are inside. I’m not saying it is good or justifying anything, but it happens. Then I paused my career and went to Oxford to study, and that’s what brought me to international organizations. At the moment, I’m an independent consultant working for UNESCO and the OECD, contributing to my country, because I just finished the UNESCO AI Readiness Assessment Methodology; I can tell you more about it later. I also founded IDON AI Lab — Idonia in Spanish — trying to make AI benefit everyone through evidence-based digital regulation and capacity building directed to women. Now, going to the topic and what motivated me when Nicolás came with the idea of having this discussion: I was all in from moment zero, because I thought, yes, now I have a global perspective — I live in Paris at the moment — and this question is not only arising in developing countries. Even in Paris, I speak with a lot of startups, and they don’t know how to implement the EU AI Act, so it’s been crazy and there are a lot of doubts. So I’d like to start, and just leave it here, by giving three key questions to open this conversation. First, we need to ask what the public policy problem is; what we want to regulate follows from that very first question. Regulation is needed to address public policy problems and fundamental and collective rights, so let’s find the most adequate solution by first identifying the problem; that’s the first step. Second, when to regulate? Map the available regulatory instruments, because most of us have consumer codes or consumer law; not every country has a data protection law, but that’s a good start, right? Intellectual property laws are in place, so let’s see what we have and assess the feasibility of adapting it, because bringing something to Congress — and here we have somebody working in the Parliament of Argentina at the table, for instance — takes a lot of time. So if we can start by enforcing the laws that we already have in place, that’s a good start. And the third question is how to regulate?
Identifying a combination of AI regulatory approaches — I’m happy to tell you more about this, Nicolas. There are several approaches to regulating AI at the moment, and there is no single best one; we need to find the right mix according to our context. Thank you.


Nicolas Fiumarelli: Thank you, Paola. Just summarizing: the first question is what the public policy problem is, then when to regulate, and finally how to regulate. Okay. Now I will introduce Ananda, here on my left. He was stuck in the Uber in traffic, but he made it. Thank you, Ananda, for being with us. Ananda represents the Youth Coalition on Internet Governance and is a global AI governance expert. He has extensive insight into the global regulatory impacts on technological innovation. So, Ananda, please share more about your work and what you bring today to our discussion.


Ananda Gautam: Thank you, Nicolas, and all the panelists. I’m so honored to be here with you, despite some hiccups, of course. Anyway, I mostly work with young people: capacity building for young people, helping people start youth initiatives in their countries, and bringing young people into global internet governance — and not only global, but also engaging in their capacity building at regional and national levels. That is my major focus right now. I am also working on different AI policies. Paola and I joined the CAIDP together; I think Sabah is also here, and Sabah was also part of our cohort. We have been learning about how the developing AI landscape is affecting us, and my perspective is most concerned with developing nations, because I come from the global south, and within the global south, Nepal itself is a very difficult context with many challenges. We have outdated legislation, and then there is recent legislation like the EU AI Act, which has extraterritorial jurisdiction: it does not apply only in the EU region, and the so-called Brussels effect is affecting legislation worldwide. So countries like Nepal are also trying to build their own AI policies, but what I have been focusing on is how we build the capacity of those developing nations so that they can build comprehensive AI policies that actually leverage the power of AI in developing those nations, and, of course, how we build the capacity of young people to engage in AI governance processes. Another thing is that while we talk about the digital divide, we are now seeing an AI divide: people having access to AI and people not having access to AI. So my focus would be on how we build the capacity of other stakeholders so that we can eliminate this divide. I think I’ll leave other things for the second round. Thank you, Nicolas.


Nicolas Fiumarelli: Thank you, Ananda. Just summarizing: this is a great issue to address for developing nations, because, as you mentioned, these are difficult contexts; not every country is prepared, and each faces different challenges when updating legislation, which differs from country to country. And in light of the AI Act, which is mandatory, there is this question of extraterritorial jurisdiction, because, you know, the internet is by nature without frontiers. So it is sometimes very difficult to regulate these kinds of things. In my opinion, as a technical person looking from the core of the internet — because IP addresses do not map to countries — it is not always easy to see how one country’s legislation can affect the way we regulate on the internet, since, as I said at the beginning, the internet is by nature trans-frontier, as His Excellency from Saudi Arabia said at the opening ceremony. And as Ananda mentioned, we are now facing the AI divide, a new concept we need to take up: how to leverage the power of AI, as you say, Ananda, and how to think about young people, who use AI a lot, right? So now I am going to introduce James, our online speaker. If the technical team can put James on the screen, that would be great. James comes from the private sector in Africa, where he focuses on innovation and its impact on regulatory practices. He is going to share examples of how African innovations navigate regulatory challenges and thrive in the face of adversity. So James, you are with us there. Please introduce yourself for the people here on site — we have a full room — and share what drives your interest in this discussion, please.


James Nathan Adjartey Amattey: So thank you very much, Nicolas, for that introduction. My name is James Amattey. I am from Ghana, and I come from a background of product management and software development, where we’ve had to create both consumer and enterprise products for education, insurance, and banking. I’ve realized in that space that innovation does not happen in a vacuum. There are certain regulatory frameworks that need to be in place for innovation to come to the forefront. Now, my friend Ananda was stuck in traffic, complaining about Uber. And Uber is one of those innovations that came about through, should I say, the advancement of technology, and one of those advancements is something we call GPS tracking. Without GPS tracking, and without it being embedded in phones, we would not have something called Uber, and without the internet, a platform like Uber would not be able to locate our friend and get him to the site. These are some of the things we tend to lose sight of sometimes: when we build software and work on digital transformation, we think it’s all about customer research and market research, but there are certain societal and regulatory things that need to happen to trigger innovation. I just gave the example of Uber. Now, there are certain examples around regulation. When we look at the internet, for example, how it came about was the breakup of AT&T, which led to the widespread infrastructure development that brought forth the internet. Without that, we would not have things like broadband. It’s not directly correlated, but the regulation that led to the breakup of AT&T, especially in the United States, is what, you know, sort of brought forth advancements in the development of the internet as we know it today. Now, there are certain times when policy tries to get in the way of innovation, and I would use one of our local laws as an example. We call it the E-Levy: a very notorious regulation that introduced a tax on, should I say, financial transactions done online. Now, the problem with that was that Ghana was at the beginning of digital financial literacy, so a lot of people were only then beginning to transact online. The law in and of itself was not bad, but the timing and the implementation of it did not have a lot of, should I say, public approval. And, according to reports, there were times when the government of Ghana missed revenue targets from taxation, and the utilization of mobile money also reduced because of the tariffs. So sometimes regulation may have a good idea, a good intention, but it’s the implementation of the regulation that makes it difficult for innovation to come forth. So I think as much as regulation is important, we also have to look at the timing of the regulation, especially in Africa, where we are mostly still catching up to a lot of innovation. We do not have a lot of homegrown solutions, so most of the solutions we use are imported. We have to take our time with regulation and make sure that there is enough understanding, enough appreciation, and enough, should I say, use cases for the technology before we try to regulate it.
Now, I do understand that sometimes the timing of regulation is important. As much as we do not want burdensome regulations, we also do not want to leave things unregulated to the point where the technology runs ahead of society and it’s very difficult for us to control it. So thank you very much for the floor. Over to you, Nicolas. Thank you so much, James. So you also touched on this idea that regulatory frameworks sometimes need to exist for innovation, right? But a population needs to be prepared. Innovation is not bad, as you say, but we need public approval. There are different cultures and different social contexts across countries. And while you mentioned that mobile money usage dropped because of the tariffs, on the other hand, it can be difficult for someone who doesn’t know how to use the technology to handle money online, right? So the implementation of a regulation is sometimes a challenge, as is getting all the people to understand, or to have this digital financial literacy, as you call the concept. So finally, we have Osei, Osei Manu Kagyah, here at the table. He’s our online moderator and is taking all the questions from the chat, and he will also direct people here on site to take the mic. So Osei will ensure active participation for our virtual audience and on site. Osei, please introduce yourself and tell us about your role in internet governance and in supporting this session.


Osei Manu Kagyah: Hello, thank you very much. This is such an important subject matter: balancing policy and innovation. I’ll be your online moderator and will also help moderate on site. If you have any question, just raise your hand, or if you are joining us online, put your question in the chat box. As a public interest technologist, this topic is very, very interesting to me. I love how you put it: how do we go about regulation? The question of whether to regulate, I think we’ve moved past it. Is it a silver bullet? And if we are going about it, how do we approach it? These are the nuances we hope to delve into. So we are very excited to join this conversation. Bring all your inputs and we will help dive deeper. Thank you very much. Over to you, Nicolas.


Nicolas Fiumarelli: Thank you so much, Osei. We will now begin the core discussion of today’s workshop, focusing on the critical policy questions. Each speaker will have approximately three to five minutes to respond to the questions directed at them. Policy question number one: what specific regulations currently hinder the adoption of AI, and how can these be reformed to promote technological advancement while ensuring safety? Natalie, specifically for you: your recent work on child sexual abuse material also focuses on AI’s role in the creation of this material, right? So can you explain what CSAM is, and where do you see the need for regulation there?


Natalie Tercova: Thank you so much, Nicolas. Some of you may have joined my lightning talk, which I delivered on day one, if I’m not wrong, so I hope I will not repeat myself here. I feel the discussion of CSAM — how it affects youth, survivors, and the well-being of society overall — is now at its next step when we talk about AI and how it comes into play, deepening the crucial aspects of the issue we are focusing on. So first, let me just start by saying what it is. We used to talk about “child pornography”; these types of terms are more known to us. Right now we focus more on the shortcut CSAM, which stands for child sexual abuse material, because it is broader and also covers material that can be manipulated through, for instance, AI models. Recently, in the Czech Republic, where I come from, we saw a big prevalence of images built partly from already existing materials that were not harmful and were spread online — sometimes by the child themselves, sometimes by a caregiver or a teacher who captured moments somewhere on school property. But then AI stepped in — or rather someone using an AI tool — to make the person in the image naked, for instance. And suddenly such already existing material was abused and transformed into CSAM. That is why we are now also focusing on AI in relation to CSAM. With this, I just want to highlight that with the introduction of AI we have lots of discussions on how this can, unfortunately, be a potential for harm. However, some people also see a potential to use AI for detection, and there is this clash: can AI tools and new emerging technologies help us tackle this issue, or will they make everything much worse? This is what I want to bring to this debate. Just to give you an idea of how prevalent CSAM is, I have a few stats. In 2023, in over 223 countries, 50,000 websites hosting CSAM were detected and taken down. We have to be mindful that this is just the tip of the iceberg: there is so much we simply don’t know about, in closed forums, somewhere on the deep web, and so forth. So this is just the tip of the iceberg, and it is already alarming. If we look at specific types of materials, such as pictures and videos, around 85 million were reported globally in 2021. This is a very alarming number. We also have to talk about deepfake technologies and how advanced editing tools make it easier for perpetrators to manipulate images and videos into CSAM. And this is not just about those who find some form of excitement in these types of materials; there is also big money involved, because perpetrators know there are people willing to buy such materials. So we are now trying to find a balance: how we can still ensure that people use new technologies — which actually do have great potential for helping us tackle all sorts of harmful content, not just CSAM — while protecting our privacy and the privacy of the most vulnerable, in this case children. Another thing is that right now there is a lot of talk about software that could detect when CSAM is circulating.
However, this also goes back to grooming — the act in which a perpetrator slowly manipulates a child, usually through text. Detecting that would mean some form of software reading the text we are typing, and there is a big clash there between privacy and safety. So here I am also excited to hear your opinions on this issue: where can we find the sweet spot, a good balance between the right to use these technologies to our advantage, seizing all the opportunities that AI and other emerging technologies can bring us, while minimizing the risks involved? Thank you, Nico. Thank you, Natalie. You touched on some interesting things because


Nicolas Fiumarelli: as you mentioned, there is money involved here; there are people who want to buy this kind of content. There is also the question of how to ensure this balance, right? Because there are people using technology for good: AI brings a lot of innovation, including for creativity, and if you look at the advancements nowadays, you see that it is very useful for a lot of work. If you follow the discussions of the Policy Network on Artificial Intelligence on job displacement, it is something happening worldwide, and the digital divide is increasing because of this — the AI divide, as Ananda said. But how do we also protect privacy? Because there are solutions, as you mentioned, software to detect this material, but that leans toward surveillance, right? How do we avoid those kinds of practices? Some countries take the approach of, say, installing software on all mobile phones, or having agreements with the mobile companies — there are only two or three big firms, not so many, so maybe it is easy for them to deploy this technology — but on the other side, that is an attack on privacy, on human rights. So it is difficult to find the balance on these critical issues, such as violent child sexual abuse material, and it is not a new thing, right? We have been talking about these issues for years, and there are different views in opposite directions. So thank you for introducing these issues. Let’s go to another area, policy question number two: how can policymakers — and we have policymakers here — and regulatory bodies design more flexible regulations that can adapt to rapid technological advancement, as we were saying, without compromising ethical standards and public safety? Here we are also touching on ethics, right? We can mention bias, we can mention copyright — different issues that are not the same as privacy but are related to these emerging technologies. So Paola, now it is your turn. What are, in your opinion, the prevailing regulatory approaches to AI governance you have seen? And is there a particular model, among all those in the UNESCO framework and other documents out there, that stands out as the most conducive to encouraging innovation? Thank you, Nicolas. Indeed,


Paola Galvez: there are different approaches, but, just so we are all aligned on one idea, I will mention some approaches; they are not exclusive, right? They can be mixed when policymakers are deciding. I would like to explain five of them — this is not an exhaustive list; there could be more — but we have seen risk-based, human rights-based, principles-based, rules-based, and outcomes-based approaches. These are what I am about to explain; you can read more in the WEF report called Generative AI Governance, published this year. The risk-based approach is the most common one; the European Union adopted it. It really optimizes regulatory resources because it focuses efforts on the areas with higher risk and minimizes burdens on low-risk areas. Its advantage is that it allows regulatory frameworks to be flexible and adaptable as circumstances change, but the challenge is that risk assessments are complex. At the moment the AI Office is developing the guidelines; there is no single model for how the risk assessment should be done, so I have seen different ones being developed in the market. But how do we know, how can we be sure, that a given risk assessment is the correct one? Is there one? We are still in that process. Second is the human rights-based approach, which, in my opinion, should be the best one. Why? Because, as Nathalie and you, Nicolas, mentioned, these artificial intelligence technologies are reproducing society’s biases and deepening inequalities, among other challenges. However, we cannot afford not to be tech optimists. AI is a reality, it is with us, and it holds tremendous promise. I do believe that when it is used wisely, it really holds potential to help achieve the SDGs and to help us be more efficient — and no, we will not be replaced, at least from what I have seen so far. But human rights are at stake, and the human rights-based approach means being grounded in international human rights law, which we already have. The advantage is that the scope is not limited: the whole life cycle of an AI system falls under this regulation and must be developed and deployed in a way that respects and upholds human rights. There is no doubt. What are the challenges of this approach? There is some complexity and opacity in AI systems — we all know these systems are called black boxes — and it also comes with the complexity that human rights protections are sometimes broadly worded and hard to interpret, so we need lawyers specializing in international human rights law. And that is what is lacking, in my opinion, in the discussion of these AI laws, because we are not really having these people at the table. As for hard law, the Council of Europe Convention on AI, Human Rights, and the Rule of Law is the one putting human rights at the center, but it is not mandatory, and we have seen that the adherence process is under way, so let’s see how it goes. It sets basic principles and global standards on what we want in terms of AI, but it is not hard law applying in our countries. The principles-based approach is the one most countries are adopting: the US, with the Executive Order on Safe, Secure, and Trustworthy AI; Singapore. What is it? It sets out just fundamental principles — fairness, accountability — and it is intended to foster innovation.
So, to your question, the principles-based approach could be the one that prevents the stifling of innovation while protecting human rights, in a sense, with these principles — fairness, no harm — but it is not complete. The rules-based approach is the Chinese one, with the China Interim Measures for the Management of Generative AI as an example. It is very rigid, with high compliance costs, but it lays out detailed rules, so it really doesn’t leave much space for interpretation. That is what applies in China at the moment. And the outcomes-based approach is what the Japanese government is applying; it focuses on achieving measurable AI-related outcomes. It also intends to foster innovation and compliance, but it offers limited control over the process, because how can you really measure the outcomes? It can be very vague, right? I would just like to finish by saying that there is a risk of a Brussels effect, in the sense of our Latin American countries trying to do what the European Union has done. And it is very important to say that our countries are not Europe; we do not have the same context, so a copy-paste is not the solution. We can take best practices where they are already in place, but it is very hard to see the results of the implementation of the EU AI Act, because it is still very new and, as you know, is being enforced in phases, so we cannot tell at the moment. Also, if you take one takeaway from these five approaches, let it be this: any regulation we want to approve in our countries must go through a public participation process, a meaningful one, meaning sitting all the parties at the table and discussing what their needs are and how they think they can be part of the solution. The Readiness Assessment Methodology, which I can tell you more about later, has this public consultation process and brings the opinion of the public and citizens to the table. Thank you, Nicolás.


Nicolas Fiumarelli: Thank you, Paola, for the detailed analysis of the five different approaches. I took some notes, so we will have good takeaways from that, and also from some notable comments you made on each. On the same line, I will now ask Ananda: what are global examples of flexible AI governance that can inspire policymakers today?


Ananda Gautam: So, I’d like to reflect on something first, because our title is about over-regulation and balancing innovation. Let’s go back to the 1970s. If the internet had been regulated at the beginning, in the first three decades before the WSIS started, we could not have the internet we are using today; we would not even be discussing internet governance. So regulation is not always the best form of what we call governance. Nicolas asked me what the best example is — maybe UNESCO. Its AI ethics framework is one of the greatest examples, endorsed by, I think, more than 90 countries — 100 now, I think — and the second version of its ethical AI concept is being worked on. These show how principles can bind people rather than legislation. So if you ask me, legislation or policy, I would go for a policy-based approach, which would actually harness the power of AI rather than regulating it into what we call over-regulation. Policy could actually promote business without undermining human rights. As Paola mentioned, the human rights-based approach is a must, and I also reflect on what Nathalie said: while AI is being developed, there will be both sides, pros and cons. If we take the example of cybersecurity, scammers are using AI to manipulate people, and phishing attacks have reached another level with the use of AI. At the same time, cybersecurity tools are being developed that use AI to detect attack patterns faster, and in some cases automated countermeasures are now applied. There are many tools developed by Palo Alto and other leading cybersecurity companies that employ AI to detect those threats. So these are the kinds of things happening. I think we are at a very early stage of development: we have only just seen the power of generative AI since ChatGPT exploded, and there are so many GPTs available. One thing is that while people are using AI, we should be very clear, in terms of legislation or policy, about how it will be used by the public. Today, school children are using AI — ChatGPT or other generative AI — to add to their knowledge. Whether it gives them the right knowledge or not is very crucial. These kinds of considerations are very important; they are covered by the different frameworks being developed, but we also need to work out, according to the national context, how people will leverage these things. For instance, if something is generated by AI, how do people distinguish it? Maybe we can call it AI literacy: people need to know what they are using and what the consequences are, and they need to be good enough to distinguish between what is generated by AI and what is original. I think those are the baselines we need to focus on. I’ll stop here.


Nicolas Fiumarelli: I like your ideas, Ananda, but I think that recognizing whether something is AI-generated is nowadays more complex than ever. There is also the complexity of these approaches, because we want to balance flexibility, enforceability, and practicality. Risk-based methods demand nuanced assessments; human rights-based approaches face challenges with opacity and legal gaps; principles-based frameworks often lack enforcement mechanisms — so each approach has its problems. And outcomes-based models emphasize outcome measurement but may struggle, as Paola said, with contextual adaptation, particularly in diverse regions — in Nepal, for example. Together, these approaches highlight, I think, the need for multidisciplinary collaboration and tailored strategies to address AI’s multifaceted risks and opportunities effectively. Going now to our online speaker: James, we would like to see your face. From your experience, how has regulatory flexibility impacted African innovation, in your opinion? James, you’re muted.


James Nathan Adjartey Amattey: Okay, sorry. Thank you very much. I think your question is very interesting, because let’s take COVID as an example. COVID really catalyzed, or should I say highlighted, the need for innovation over regulation. During COVID there was little regulation; it was all innovation, and with that innovation we were able to control the cost, the spread, and, should I say, the lifestyle change that came with COVID, right? Now, we do not want a situation where only emergencies allow us to be flexible with laws; we want to adopt a posture of having that flexibility while keeping guard, being on watch, right? It’s like having a security man: you do not hope that a thief attacks you, but he is there for when the thief comes. So I like the idea of policies over regulations — frameworks, constructive ways of doing things that guide people on how to do things properly, rather than prescribing what they can and cannot do. Of course, there are times when you have to do the latter, but as we are currently in the experimental phase of innovation, especially with AI, it is very important that we allow it to spread its wings so we can learn what is possible and what is not. In the African context, COVID really opened doors. For example, we had autonomous drones delivering COVID shots, PPE, and face masks to remote locations. We had trackers that were used to identify COVID hotspots so that responses could be designed for them. And there are several other examples. We have also had issues with flooding in Ghana, where most of my work, especially in open data, has helped us use AI to identify roads, to help relief reach victims of the dam spillage in 2023. And we have done a lot of work around public health and the correlation of health data using mobile apps. All of these things have been possible through innovation. So I think innovation and regulation should be teammates rather than competitors over who is right and who is superior. We should collaborate more: innovation should not be an afterthought, and regulation should not be an afterthought; rather, we should build these frameworks into our innovation pipelines and our regulatory pipelines. Thank you very much.


Nicolas Fiumarelli: Thank you so much, James. Due to time constraints — we are reaching the end of our session — I will pose one condensed question to all the panelists, with one minute each to answer. How can successful examples of AI applications, and the international frameworks we have been discussing, inform regulatory strategies that balance innovation with safeguarding employment, addressing societal impacts such as job displacement as well as critical needs like healthcare and industrial automation? May we start with Paola, drawing from your experience leading the UNESCO AI RAM? Yes.


Paola Galvez: So the question was very long; I will wrap up by just answering. First, think about local needs: what regulations do we have in place, and how can we complement them? Sometimes — and this is a personal opinion — we need an umbrella regulation, like the AI Act; not guidelines, it must be mandatory. Why? Because the country needs to have a position on what it wants AI to do and to be for its citizens. What is the country’s position on lethal autonomous weapons? I think that should be a yes or no, right? So that could be mandatory, and that is a prohibition or not. Surveillance, right? Are we using AI for safety and security? Then let’s be mindful that it can target migrants, or other communities that are vulnerable, or minorities. So it is very important to be careful when we use that. And taking a position means regulation; that’s law. Beyond that position, please, let’s invest in capacity development. Digital skills are key to using AI, because we will never, as a country, be able to leverage the full potential of this technology if we do not help our citizens understand it and use it at its best. Thank you. This is very condensed, but I am happy to speak later.


Nicolas Fiumarelli: Thank you so much, Paola. Maybe Natalie, if you want to make one minute contribution on the health care part that you are the expert, please.


Natalie Tercova: Of course, I’ll try to be very brief. I very much agree that it depends on the specific case. We sometimes have discussions about what we should do in healthcare, but this is such a broad concept. Ethical considerations such as patients’ privacy, the protection of patients’ data, and minimizing bias in algorithms when it comes to treatment and healthcare are, from my perspective, non-negotiables, and we definitely have to take these into account when we talk about healthcare. However, when we talk about, say, diagnostic tools that can assist doctors with critical conditions, these carry much higher risks and higher stakes than, for instance, AI tools or other technologies used for administrative scheduling systems — for instance, how we set a timeline for certain operations and so on. So again, it is very broad, and we have to take into consideration the level of risk involved. In light of this, I believe that high-risk applications should undergo more rigorous review before they come into practice, while low-risk innovations can proceed under lighter regulatory requirements; then we can really grow, focus more on innovation, and make these things faster and more effective. So again, it is about balance. I don’t want to dive into more detail, but I am of course happy to talk about it more, because we recently conducted robust research in Central Europe on AI usage in healthcare. We also asked people whether they use it for their own questions — for instance, whether they ask ChatGPT, “I have this issue, this is bothering me” — and whether they trust what the AI tells them. Because one thing is usage: people may just be experimenting and generally excited about these opportunities, but mindful that what is recommended to them is sometimes not the best. So we have some very interesting insights, and I am happy to talk more about this over


Nicolas Fiumarelli: coffee. Thank you. Thank you Nathalie. So thank you so much Nathalie. So yes we have only one hour session so you can reach Nathalie at the coffee and continue that conversation. Going for Osei do we have any question online and maybe we have one question for the on-site the first raise the hand you can make it okay. Okay so it’s not really a question this is a


Osei Manu Kagyah: suggestion from someone he talked about human rights being at the core of the conversation as said by the UN but I will have a question for all of us to mull over. I think the initiation or say conception of these policy questions it starts from how that lack of trust between multi-stakeholder the various multi-stakeholders and a good example is argument a this is an argument a I won’t ask debate about this the UK secretary of state for science and technology Peter Kyle argues that tech companies or say companies, should be treated like states because their investments in innovation exceed that of governments. Argument B was some few weeks back where a former Dutch member of the European Parliament argued that focus should be to strengthen the primacy of democratic governance and oversight and not to show humility. She argued further to highlight the need for self-confidence on the part of democratic government to make sure that these companies, these services, are taking their proper role within a rule of law-based system and are not overtaking it. Obviously, I think argument B sounds persuasive, but then how do we ensure stakeholders that do have a say in there? So the conception or the initiation of all these policy conversations is the lack of trust I have noticed. But if you have any question on-site, please do raise your hand. Please be snappy because we have privacy as I speak.


Audience: Yes, thank you so much. My name is Agustina, I’m from Argentina, and I have a quick question. Sorry. When regulating AI or technology, what I have found is that we have different layers or aspects: we have the users, we have the developers, and, regarding AI, the training of these models. In this sense, I see that users are already — to put it in physical-world terms — punished by the law. But on the other side, do you think that, for instance, companies should be responsible for what they develop, and that those who train the models should also be responsible for the effects this has, or not?


Nicolas Fiumarelli: Maybe some of the panelists want to… answer the question, and we go to the last question here, right? Okay. Who wants to answer the question? Or we go directly to the next question.


Audience: Thank you, Moderator. Mine is on the topic of this discussion. Have we really reached the stage of overregulation, given that with the advent of ICTs and now with the coming in of emerging technologies, we have seen regulation playing catch up. It’s usually ahead of regulation. So have we reached the stage of overregulation yet? Do you have a question as well? That will be the last panelist.


James Nathan Adjartey Amattey: Yes, I think I can answer this a bit. Okay, James, you can answer and then wrap up. Yes. So yes, there is a risk of innovation going ahead of regulation, but it all boils down to AI literacy. And we totally come into an understanding of what AI really is and what AI is not, right? Because for example, if you ask a lot of people, what is AI? Most of them will most of them will answer chat GPT, but chat GPT is just one use case of AI. It’s not AI enough in it of itself, right? So we need to be able to build literacy programs for regulators, for developers, for users, for us to be able to have an understanding of what are intersecting interests are. And then we can be able to then look at those intersective interests and then be able to now tailor AI solutions to our personal use cases. So most of my work for next year will literally fall around AI literacy and building literacy programs to help understand, help, should I say, proliferate that knowledge of what AI truly is and what AI is not, what it can do, what it should do, what it should be allowed to do. And then we can take the rest from there. Thank you very much. Happy to connect online. If, yes, my name is James. You can find me on LinkedIn.


Nicolas Fiumarelli: OK, thank you for your time, James, and your valuable contribution. Thank you, everyone, for the engaging discussion. Sorry for the ones that were in the queue. We are six minutes out of time. Today, we explored these critical aspects of AI regulation and innovation, drawing some insight from diverse regions and sectors, as you have seen. So a special thanks to our panelists here and their valuable contribution and also our audience for your active participation. Thank you, and enjoy the rest of the AGF. Thank you, online audience. We might take a photo on the front, if you want. Come, everybody. I think we’re good. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. OK. Great. OK. Great. Great. Great. Great. OK. OK. OK. OK. OK. OK. OK. OK. There. you you you you



Paola Galvez

Speech speed: 144 words per minute

Speech length: 1840 words

Speech time: 764 seconds

Regulation is necessary but should not stifle innovation

Explanation

Paola Galvez argues that regulation is needed, but the focus should be on how to regulate rather than whether to regulate. She emphasizes the importance of balancing regulatory safeguards with fostering innovation.


Evidence

Paola mentions different regulatory approaches such as risk-based, human rights-based, principles-based, rules-based, and outcomes-based.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Ananda Gautam


James Nathan Adjartey Amattey


Agreed on

Balancing regulation and innovation


Differed with

Ananda Gautam


Differed on

Approach to AI regulation


Human rights-based approaches are crucial but face implementation challenges

Explanation

Galvez emphasizes the importance of human rights-based approaches in AI regulation. However, she notes that these approaches face challenges due to the complexity and opacity of AI systems, as well as the broad wording of human rights protections.


Evidence

She mentions the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law as an example of a human rights-based approach.


Major Discussion Point

Addressing AI Risks and Ethical Concerns


Copy-pasting EU regulations is not appropriate for developing countries

Explanation

Galvez warns against the ‘Brussels effect’ where Latin American countries might try to copy EU regulations. She emphasizes that context matters and that developing countries have different needs and circumstances compared to Europe.


Major Discussion Point

Developing Context-Appropriate AI Governance


Public participation is crucial in developing AI policies

Explanation

Galvez stresses the importance of involving all stakeholders in the development of AI policies. She argues for a meaningful public participation process that includes all parties in discussions about their needs and potential solutions.


Evidence

She mentions the UNESCO AI Readiness Assessment Methodology as an example of a process that includes public consultation.


Major Discussion Point

Developing Context-Appropriate AI Governance


Digital skills development is key to leveraging AI’s potential

Explanation

Galvez emphasizes the importance of investing in capacity development and digital skills. She argues that countries cannot leverage the full potential of AI technology without helping citizens understand and use it effectively.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Ananda Gautam


James Nathan Adjartey Amattey


Agreed on

Importance of AI literacy and capacity building


Local needs and existing regulations should inform AI governance

Explanation

Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance frameworks. She argues for complementing existing regulations rather than creating entirely new ones.


Evidence

She mentions the need for an umbrella regulation that defines a country’s position on key AI issues like autonomous weapons and surveillance.


Major Discussion Point

Developing Context-Appropriate AI Governance



Ananda Gautam

Speech speed: 139 words per minute

Speech length: 873 words

Speech time: 374 seconds

Flexible, principle-based approaches can foster innovation while protecting rights

Explanation

Gautam advocates for policy-based approaches over strict legislation. He argues that this approach can promote business without undermining human rights and allows for more flexibility in governance.


Evidence

He cites the UNESCO AI ethics framework and the WSIS ethical AI concept as examples of principle-based approaches.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Paola Galvez


James Nathan Adjartey Amattey


Agreed on

Balancing regulation and innovation


Differed with

Paola Galvez


Differed on

Approach to AI regulation


AI divide between countries needs to be addressed

Explanation

Gautam highlights the growing AI divide between countries, particularly affecting developing nations. He emphasizes the need to build capacity in these nations to leverage AI’s power effectively.


Major Discussion Point

Developing Context-Appropriate AI Governance


Capacity building in developing nations is crucial

Explanation

Gautam stresses the importance of building capacity in developing nations to engage in AI governance processes. He argues that this is necessary for these countries to develop comprehensive AI policies that leverage AI’s power effectively.


Major Discussion Point

Building AI Capacity and Literacy


Public understanding of AI-generated content is important

Explanation

Gautam emphasizes the need for public understanding of AI-generated content. He argues that people need to be able to distinguish between AI-generated and original content, which he refers to as AI literacy.


Evidence

He mentions the use of AI by schoolchildren and the need for them to understand whether they are getting the right knowledge.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Paola Galvez


James Nathan Adjartey Amattey


Agreed on

Importance of AI literacy and capacity building



James Nathan Adjartey Amattey

Speech speed: 129 words per minute

Speech length: 1561 words

Speech time: 723 seconds

COVID-19 highlighted need for innovation over rigid regulation

Explanation

Amattey uses the COVID-19 pandemic as an example of how innovation can thrive with less regulation in times of crisis. He argues for maintaining this flexibility in normal times, balancing innovation with necessary safeguards.


Evidence

He cites examples of autonomous drones delivering COVID supplies and AI-powered trackers identifying hotspots during the pandemic.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Paola Galvez


Ananda Gautam


Agreed on

Balancing regulation and innovation


AI literacy is needed to understand risks and benefits

Explanation

Amattey emphasizes the importance of AI literacy for regulators, developers, and users. He argues that understanding what AI is and isn’t is crucial for tailoring AI solutions to specific use cases and addressing intersecting interests.


Evidence

He mentions his future work will focus on building AI literacy programs.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Paola Galvez


Ananda Gautam


Agreed on

Importance of AI literacy and capacity building



Natalie Tercova

Speech speed: 159 words per minute

Speech length: 1312 words

Speech time: 492 seconds

AI can be used to both create and detect child sexual abuse material

Explanation

Tercova discusses the dual role of AI in relation to child sexual abuse material (CSAM). She highlights how AI can be used to create deepfake CSAM, but also how it can be used to detect and combat such material (see the illustrative sketch after this block).


Evidence

She cites statistics on the prevalence of CSAM, mentioning 50,000 websites hosting CSAM in 2023 and 85 million reported materials in 2021.


Major Discussion Point

Addressing AI Risks and Ethical Concerns
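To illustrate the detection side mentioned above, one widely described technique is matching image hashes against curated databases of known material; industry systems such as PhotoDNA rely on robust perceptual hashes. Below is a minimal sketch in Python of that idea using a toy average hash. It assumes Pillow is installed, and the hash function, distance threshold, and known-hash database are all hypothetical simplifications, not any vendor's actual method.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: downscale to an 8x8 grayscale image and
    threshold each pixel against the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(path: str, known_hashes: set[int], max_distance: int = 5) -> bool:
    """Flag an image if its hash is within a small Hamming distance of
    any entry in a (hypothetical) database of known hashes."""
    h = average_hash(path)
    return any(hamming(h, k) <= max_distance for k in known_hashes)
```

In practice, hash lists are maintained by vetted organizations and matching thresholds are tuned carefully, since both false positives and deliberate evasion are serious concerns.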


Risk-based regulation allows focus on high-risk AI applications

Explanation

Tercova advocates for a risk-based approach to AI regulation in healthcare. She argues that high-risk applications should undergo more rigorous review, while low-risk innovations can proceed under lighter regulatory requirements (see the sketch after this block).


Evidence

She contrasts high-risk diagnostic tools with lower-risk administrative scheduling systems in healthcare.


Major Discussion Point

Balancing AI Regulation and Innovation
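As a concrete, hedged illustration of such risk-based triage, here is a minimal Python sketch of how a compliance team might map use cases to review tracks. The tiers, the example use cases, and the cautious default are illustrative assumptions, not drawn from the session or from any specific statute.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "rigorous pre-deployment review"
    LIMITED = "transparency and documentation duties"
    MINIMAL = "lighter, self-certification track"

# Illustrative mapping of healthcare use cases to tiers,
# echoing the diagnostic-vs-scheduling contrast above.
USE_CASE_TIERS = {
    "diagnostic_decision_support": RiskTier.HIGH,
    "triage_chatbot": RiskTier.LIMITED,
    "appointment_scheduling": RiskTier.MINIMAL,
}

def review_track(use_case: str) -> RiskTier:
    """Return the review track for a use case, defaulting to HIGH
    when the use case is unknown (an assumed, cautious default)."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("diagnostic_decision_support", "appointment_scheduling", "new_unclassified_tool"):
        print(f"{case}: {review_track(case).value}")
```

Defaulting unknown use cases to the strictest track is one possible design choice; a regulator could equally require an explicit classification step before any deployment.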


Patient privacy and data protection are non-negotiable in healthcare AI

Explanation

Tercova emphasizes that patient privacy, data protection, and minimizing bias in algorithms are non-negotiable aspects of AI use in healthcare. She argues that these ethical considerations must be taken into account regardless of the specific application.


Major Discussion Point

Addressing AI Risks and Ethical Concerns


Agreements

Agreement Points

Balancing regulation and innovation

speakers

Paola Galvez


Ananda Gautam


James Nathan Adjartey Amattey


arguments

Regulation is necessary but should not stifle innovation


Flexible, principle-based approaches can foster innovation while protecting rights


COVID-19 highlighted need for innovation over rigid regulation


summary

The speakers agree that while regulation is necessary, it should be flexible enough to allow for innovation. They advocate for approaches that balance safeguards with the ability to innovate.


Importance of AI literacy and capacity building

speakers

Paola Galvez


Ananda Gautam


James Nathan Adjartey Amattey


arguments

Digital skills development is key to leveraging AI’s potential


Public understanding of AI-generated content is important


AI literacy is needed to understand risks and benefits


summary

The speakers emphasize the crucial role of AI literacy and capacity building in enabling effective use and governance of AI technologies.


Similar Viewpoints

Both speakers stress the importance of considering local context and needs when developing AI governance frameworks, particularly for developing nations.

speakers

Paola Galvez


Ananda Gautam


arguments

Local needs and existing regulations should inform AI governance


Capacity building in developing nations is crucial


Both speakers emphasize the importance of protecting human rights and privacy in AI applications, while acknowledging the challenges in implementing these protections.

speakers

Natalie Tercova


Paola Galvez


arguments

Patient privacy and data protection are non-negotiable in healthcare AI


Human rights-based approaches are crucial but face implementation challenges


Unexpected Consensus

Risk-based approach to AI regulation

speakers

Paola Galvez


Natalie Tercova


arguments

Regulation is necessary but should not stifle innovation


Risk-based regulation allows focus on high-risk AI applications


explanation

Despite coming from different backgrounds (policy and healthcare), both speakers advocate for a risk-based approach to AI regulation, suggesting a broader consensus on this strategy across sectors.


Overall Assessment

Summary

The main areas of agreement include the need for balanced regulation that doesn’t stifle innovation, the importance of AI literacy and capacity building, and the necessity of considering local contexts in AI governance.


Consensus level

There is a moderate to high level of consensus among the speakers on key issues. This suggests a growing recognition of common challenges and potential solutions in AI governance across different sectors and regions. However, the specific implementation details and priorities may still vary, indicating the need for continued dialogue and collaboration.


Differences

Different Viewpoints

Approach to AI regulation

speakers

Paola Galvez


Ananda Gautam


arguments

Regulation is necessary but should not stifle innovation


Flexible, principle-based approaches can foster innovation while protecting rights


summary

While both speakers emphasize the importance of balancing regulation and innovation, Galvez argues for a more structured regulatory approach, while Gautam advocates for a more flexible, principle-based approach.


Unexpected Differences

Role of COVID-19 in shaping AI regulation

speakers

James Nathan Adjartey Amattey


Paola Galvez


arguments

COVID-19 highlighted need for innovation over rigid regulation


Copy-pasting EU regulations is not appropriate for developing countries


explanation

While not directly contradictory, these arguments present an unexpected difference in perspective on how crises and external influences should shape AI regulation in developing countries.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to AI regulation, the balance between innovation and safeguards, and the consideration of local contexts in developing AI governance frameworks.


difference_level

The level of disagreement among the speakers is moderate. While there are differing perspectives on specific approaches to AI regulation, there is a general consensus on the need for balanced governance that protects rights while fostering innovation. These differences highlight the complexity of developing effective AI governance frameworks that can address diverse global needs and contexts.


Partial Agreements

Both speakers agree on the importance of human rights and privacy in AI regulation, but they differ in their focus. Galvez discusses broader human rights challenges, while Tercova emphasizes specific healthcare-related privacy concerns.

speakers

Paola Galvez


Natalie Tercova


arguments

Human rights-based approaches are crucial but face implementation challenges


Patient privacy and data protection are non-negotiable in healthcare AI




Takeaways

Key Takeaways

AI regulation is necessary but should be flexible to avoid stifling innovation


A human rights-based approach to AI governance is crucial but faces implementation challenges


Context-appropriate AI governance is needed, especially for developing countries


Building AI literacy and capacity across stakeholders is essential


Risk-based approaches can help focus regulation on high-risk AI applications while allowing innovation in lower-risk areas


Public participation and multi-stakeholder collaboration are important in developing AI policies


Resolutions and Action Items

Develop AI literacy programs for regulators, developers, and users


Invest in digital skills development to leverage AI’s potential


Consider local needs and existing regulations when developing AI governance frameworks


Implement rigorous review processes for high-risk AI applications in healthcare


Unresolved Issues

How to effectively balance privacy and safety in AI-powered content moderation


The extent of responsibility for AI developers and companies for the effects of their technologies


Whether the current state of AI regulation constitutes over-regulation or under-regulation


How to address the growing AI divide between developed and developing countries


Suggested Compromises

Adopt principle-based frameworks to provide guidance without rigid rules


Implement lighter regulatory requirements for low-risk AI innovations


Use policies and guidelines instead of strict legislation where possible


Balance innovation and regulation by treating them as collaborative rather than competitive forces


Thought Provoking Comments

I don't think the question is any more whether to regulate or not to regulate; we're beyond that, in my opinion. What we're working on now is how to regulate, right?

speaker

Paola Galvez


reason

This comment shifts the framing of the discussion from debating regulation itself to focusing on implementation approaches. It challenges the premise of the session title and sets a new direction.


impact

This reframing influenced subsequent speakers to focus more on specific regulatory approaches and implementation challenges rather than debating regulation in principle.


There are certain regulatory frameworks that need to happen for innovation to come to the forefront.

speaker

James Nathan Adjartey Amattey


reason

This insight highlights the complex relationship between regulation and innovation, suggesting they can be complementary rather than opposed.


impact

It prompted discussion of specific examples where regulation enabled or catalyzed innovation, adding nuance to the debate.


We have to take into consideration the level of risk that is involved. So in light of this, I believe that high-risk applications should undergo more rigorous review before they come into practice, while low-risk innovations can proceed under lighter regulatory requirements.

speaker

Natalie Tercova


reason

This comment introduces a nuanced, risk-based approach to regulation that balances innovation and safety concerns.


impact

It shifted the conversation towards more granular considerations of how to tailor regulatory approaches to different AI applications and risk levels.


There are different approaches, but just so that we are all unified in one idea, I will mention some approaches; they are not exclusive, right? They can be a mix when the policymakers are deciding.

speaker

Paola Galvez


reason

This insight highlights the complexity of AI governance and the potential for hybrid regulatory approaches.


impact

It led to a more detailed discussion of various regulatory models (risk-based, human rights-based, principles-based, etc.) and their respective strengths and weaknesses.


If the internet had been regulated at the beginning, in the first three decades before the WSIS started, we couldn't have this internet that we are using today. We couldn't have been talking about or discussing internet governance at all.

speaker

Ananda Gautam


reason

This historical perspective provides a cautionary tale about over-regulation stifling innovation.


impact

It prompted reflection on balancing regulation with allowing space for technological development and innovation.


Overall Assessment

These key comments shaped the discussion by moving it from a binary debate about regulation vs. non-regulation to a more nuanced exploration of different regulatory approaches, their impacts on innovation, and the need to balance multiple concerns including safety, ethics, and technological progress. The discussion evolved to consider risk-based frameworks, the role of soft law and principles, and the importance of context-specific approaches tailored to different applications and regions. There was a general consensus that some form of governance is necessary, but disagreement on the exact form it should take and how to implement it effectively without stifling innovation.


Follow-up Questions

How can we find a balance between privacy and safety when using AI to detect child sexual abuse material (CSAM)?

speaker

Natalie Tercova


explanation

This is a crucial issue as it involves the tension between protecting children and maintaining individual privacy rights.


How can we ensure that risk assessments for AI systems are accurate and reliable?

speaker

Paola Galvez


explanation

This is important for implementing effective risk-based regulatory approaches to AI governance.


How can we improve AI literacy among policymakers, developers, and users?

speaker

James Nathan Adjartey Amattey


explanation

This is crucial for informed decision-making and effective regulation of AI technologies.


How can we design regulatory frameworks that are flexible enough to adapt to rapid technological advancements?

speaker

Nicolas Fiumarelli


explanation

This is important to ensure regulations remain relevant and effective as AI technology evolves.


How can we balance the need for innovation with the protection of human rights in AI development and deployment?

speaker

Paola Galvez and Ananda Gautam


explanation

This is critical for ensuring AI benefits society while minimizing potential harms.


How can developing nations build comprehensive AI policies that leverage the power of AI while addressing their specific challenges?

speaker

Ananda Gautam


explanation

This is important for ensuring equitable global development of AI technologies and policies.


How can we distinguish between AI-generated and human-generated content, and what are the implications for AI literacy?

speaker

Ananda Gautam


explanation

This is crucial for addressing potential misuse of AI and ensuring informed consumption of information.


How can we design AI regulations that take into account local needs and existing regulatory frameworks?

speaker

Paola Galvez


explanation

This is important for creating effective and context-appropriate AI governance.


How should responsibility be allocated among AI developers, companies, and users for the effects of AI systems?

speaker

Audience member (Agustina from Argentina)


explanation

This is crucial for establishing accountability in AI development and use.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.