Open Forum #38 Harnessing AI innovation while respecting privacy rights
Session at a Glance
Summary
This panel discussion focused on the intersection of AI innovation and privacy protection, exploring challenges and potential solutions in AI governance. Experts from various fields, including government, academia, and regulatory bodies, shared insights on balancing technological advancement with privacy rights.
The discussion highlighted the OECD’s recent work in updating AI principles and establishing a partnership with the Global Partnership on AI. Panelists emphasized the importance of a comprehensive approach to AI governance, considering privacy alongside other values such as fairness, transparency, and human agency. They noted the challenges in balancing these sometimes conflicting priorities, particularly when dealing with human rights that cannot be traded off.
Privacy concerns were examined across the AI lifecycle, from data collection to model deployment and retirement. The experts stressed the need for age-appropriate design in AI systems, especially concerning children’s data protection. The conversation also touched on the convergence of AI with other technologies like blockchain and neurotechnology, highlighting the complexity of privacy protection in a rapidly evolving technological landscape.
Panelists discussed the role of data protection authorities in developing practical approaches to safeguard privacy while fostering innovation. They emphasized the importance of global governance frameworks and the need to translate principles into enforceable actions. The discussion concluded with calls for strengthened legal frameworks, increased transparency, and greater involvement of civil society in AI and privacy-related policymaking.
Overall, the panel underscored the critical nature of privacy protection in AI development and deployment, advocating for a balanced approach that considers both innovation and human rights.
Key points
Major discussion points:
– The intersection of AI and privacy, including challenges and risks
– The need for global governance frameworks and cooperation on AI and privacy
– The AI lifecycle and how privacy considerations apply at each stage
– The role of data protection authorities in regulating AI and privacy
– Balancing innovation with privacy protection in AI development
Overall purpose:
The goal of this discussion was to explore the complex relationship between AI and privacy, examining key challenges, policy approaches, and potential solutions for protecting privacy rights while fostering responsible AI innovation. The panel aimed to bring together diverse perspectives from government, academia, technical experts, and regulators to have a comprehensive dialogue on this important issue.
Tone:
The overall tone was informative and collaborative. Speakers shared insights from their respective areas of expertise in a constructive manner. There was a sense of urgency about addressing privacy challenges, but also optimism about finding solutions through cooperation. The tone became slightly more impassioned toward the end as audience members raised additional concerns, but remained respectful and solution-oriented throughout.
Speakers
– Lucia Russo: Moderator
– Juraj Čorba: Senior expert for digital regulation and governance from the Slovak Ministry of Informatization; Chair of the OECD Working Party on AI Governance; Chair of the Global Partnership on AI
– Clara Neppel: Senior director at IEEE; Co-chair of the OECD expert group on AI data and privacy
– Thiago Guimarães Moraes: Specialist on AI governance and data protection at the Brazilian Data Protection Authority
– Jimena Viveros: Member of the UN Secretary General’s High-Level Advisory Body on AI, Managing Director and CEO of IQuilibriumAI
Full session report
AI Innovation and Privacy Protection: Challenges and Solutions in Governance
This panel discussion brought together experts from government, academia, and regulatory bodies to explore the complex intersection of AI innovation and privacy protection. The conversation highlighted key challenges in AI governance and potential solutions for safeguarding privacy rights while fostering responsible technological advancement.
Key Themes and Challenges
1. Privacy Concerns in Advanced AI Systems
The panelists unanimously agreed that advanced AI systems pose significant privacy challenges due to their extensive data requirements. Juraj Čorba, representing the Slovak Ministry of Informatization and the OECD, emphasized that AI’s dependence on data inherently creates privacy issues. Clara Neppel from IEEE noted that generative AI exacerbates these concerns through vast data collection and potential re-identification of individuals.
Specific examples of privacy challenges included:
– The potential for AI systems to infer sensitive information from seemingly innocuous data
– Risks of re-identification in anonymized datasets
– Challenges in obtaining meaningful consent for data use in complex AI systems
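The re-identification risk is worth making concrete. A hypothetical illustration (not an example discussed by the panel): k-anonymity is a simple way to measure how exposed "anonymized" records are. If some combination of quasi-identifiers (e.g. postal code, birth year, gender) is shared by only one record, that individual can be singled out even after names are removed.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity of a dataset: the size of the smallest
    group of records sharing the same quasi-identifier values.
    k == 1 means at least one record is uniquely identifiable."""
    groups = Counter(
        tuple(r[qi] for qi in quasi_identifiers) for r in records
    )
    return min(groups.values())

# "Anonymized" records: names removed, but quasi-identifiers intact.
records = [
    {"zip": "10115", "birth_year": 1980, "gender": "F"},
    {"zip": "10115", "birth_year": 1980, "gender": "F"},
    {"zip": "10117", "birth_year": 1975, "gender": "M"},  # unique combination
]
k = k_anonymity(records, ["zip", "birth_year", "gender"])
# k == 1 here: the third record can be singled out by its attributes alone.
```

A low k signals exactly the re-identification risk the panelists described: removing direct identifiers is not the same as anonymization when auxiliary data can match the remaining attributes.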
Thiago Guimarães Moraes, from the Brazilian Data Protection Authority, highlighted the complex trade-offs between privacy, fairness, and utility in AI systems. Jimena Viveros, a member of the UN Secretary General’s High-Level Advisory Body on AI, expanded on this, noting that AI data collection and use can have far-reaching effects on democratic institutions and geopolitics.
2. Global Governance and Regulatory Frameworks
There was strong consensus on the need for global governance frameworks and harmonized regulations to address the transboundary nature of data and AI-related privacy challenges. Čorba mentioned that the OECD has updated its AI principles and definition to reflect technological developments and privacy concerns. He also highlighted the relevance of the UN Digital Compact in relation to AI governance.
Viveros advocated for UN recommendations aimed at creating a global AI data framework to protect human rights. She also proposed recognizing data as a “digital public good,” sparking discussion about new approaches to data governance in the AI era.
Moraes highlighted the role of data protection authorities in developing guidance and regulatory sandboxes to address AI privacy issues. He emphasized their work in:
– Providing technical assistance to organizations implementing AI
– Developing guidelines for privacy-enhancing technologies
– Collaborating with other regulatory bodies to address cross-cutting issues
3. Balancing Innovation and Privacy Protection
A key point of discussion was the challenge of balancing AI innovation with privacy protection. Neppel stressed the importance of weighing the economic benefits of AI against privacy risks. She introduced the concept of the AI lifecycle and its implications for privacy, noting that privacy considerations must be integrated at every stage of AI development and deployment.
Moraes emphasized the need for privacy-enhancing technologies and techniques like differential privacy. However, he argued that from a human rights perspective, privacy and other fundamental rights cannot be compromised or traded off, stating, “Human rights cannot be traded off. And that’s here one of the main challenges. We are talking about trade-off of values in a technical level that they cannot mean undermining of human rights.”
4. Intersections with Other Technologies
The discussion highlighted the importance of considering AI privacy issues within the broader context of emerging technologies. Čorba noted that the convergence of AI with technologies like blockchain and neurotechnology creates new privacy challenges. He stressed the need to consider the full “digital stack” when addressing AI and privacy governance.
An audience member raised the specific issue of blockchain’s immutability and its implications for data privacy. The panelists acknowledged the challenges this poses, particularly concerning data deletion rights and the right to be forgotten.
Key Solutions and Recommendations
1. Age-Appropriate Design and Children’s Data Protection
Neppel emphasized the crucial importance of age-appropriate design in AI systems, particularly concerning children’s data protection. She highlighted the need for special safeguards and considerations when AI systems interact with or process data from minors.
2. Privacy-Enhancing Technologies and Techniques
The panelists discussed various technical approaches to enhancing privacy in AI systems. Differential privacy was highlighted as a potential technique to balance data utility with privacy protection. Moraes stressed the importance of these technologies in practical implementation of privacy principles.
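To illustrate the kind of technique the panelists referred to (a minimal sketch, not code discussed in the session), the Laplace mechanism is the textbook way to apply differential privacy to a numeric query: noise calibrated to the query's sensitivity and a privacy budget epsilon is added to the true answer, so aggregate statistics remain useful while any single individual's contribution is masked.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count. A counting query has sensitivity 1
    (one person changes the count by at most 1), so Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: count users over 40 without exposing any individual's age.
ages = [23, 45, 31, 52, 67, 29, 41]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
# The result is the true count plus noise; a smaller epsilon
# (stronger privacy) means more noise and lower utility.
```

This makes the utility/privacy tension discussed by the panel tangible: epsilon is precisely the dial by which data utility is traded against privacy protection.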
3. Global Cooperation and Harmonized Regulations
There was strong agreement on the need for international cooperation in developing AI governance frameworks. The speakers advocated for harmonized regulations and the adoption of international AI governance standards at national levels. Čorba mentioned the OECD’s expert group on AI, data, and privacy as an example of ongoing international efforts.
4. Strengthening Legal Frameworks
The discussion concluded with calls for strengthened legal frameworks to ensure effective privacy protection in the age of AI. This includes updating legislation to keep pace with technological advancements and raising public awareness about AI and privacy issues.
Thought-Provoking Insights
Jimena Viveros provided a particularly impactful perspective, stating, “AI is data, so we cannot have AI without data. And data comes with privacy issues, that’s just a problem.” This succinctly captured the fundamental tension at the heart of the discussion.
Thiago Guimarães Moraes emphasized the non-negotiable nature of human rights in AI development, highlighting the challenge of balancing technical trade-offs without compromising fundamental rights.
Conclusion
The panel discussion underscored the critical importance of addressing privacy concerns in AI development and deployment. While there was broad agreement on the challenges and the need for global cooperation, the conversation revealed the complexity of balancing innovation, economic benefits, and fundamental rights protection.
Key takeaways included:
– The need for privacy considerations throughout the AI lifecycle
– The importance of international collaboration in developing governance frameworks
– The role of data protection authorities in guiding responsible AI implementation
– The potential of privacy-enhancing technologies in addressing AI privacy challenges
As AI continues to advance, ongoing dialogue and collaborative efforts will be crucial in developing effective governance frameworks that safeguard privacy while fostering responsible technological progress. The discussion highlighted that while technical solutions are important, they must be underpinned by strong legal frameworks and a commitment to protecting fundamental human rights in the digital age.
Session Transcript
Lucia Russo: organized by the OECD on how to harness AI innovation while protecting privacy rights, and exactly this is the very focus of this panel today. It’s a concern that has been heightened by recent developments in the technology, and the OECD recommendation, in its revision earlier this year, has evolved to reflect the evolving technological landscape and the increased challenges raised by advanced AI systems, including privacy rights. So in our discussion today we would like to navigate these three main aspects: the privacy challenges in advanced AI systems, the policy landscape for AI governance in relation to privacy, and how to develop practical, forward-looking solutions. I am joined for this discussion today by an exceptional panel of experts who bring diverse perspectives on AI governance, spanning government policy, the technical community, academia, and regulators. So I would like to welcome today Juraj Čorba, senior expert for digital regulation and governance from the Slovak Ministry of Informatization; Juraj is the chair of the OECD Working Party on AI Governance and chair of the Global Partnership on AI. We have Clara Neppel, senior director at IEEE and co-chair of the OECD expert group on AI, data and privacy, and Thiago Guimarães Moraes, specialist on AI governance and data protection at the Brazilian Data Protection Authority. We will also have Jimena Viveros joining us, I believe a little later; she is a member of the UN Secretary-General’s High-Level Advisory Body on AI. So the way this panel will unfold will be to have our speakers bring their perspectives around this topic, and then we will also have time for a discussion with the audience, both here and online. We are monitoring the chat, so we will give voice to those who have questions online. So I will now start with Juraj, and there should be some slides on the screen. 
So Juraj, as the chair of the Working Party on AI Governance, you played a key role in guiding the discussions that led to the revision of the OECD recommendation on AI. Could you tell us about the motivations behind updating the OECD recommendation, and also what were the primary concerns raised by advanced AI systems and how these affected the revision?
Juraj Čorba: One, two, three, do you hear me please? If you could please change my mic, I’m afraid it’s not working properly. Mic, sounds like mic, right, thanks. One, two, three. Oh, this is better now, I hope, or not really. Is it better? One, two, three. But anyway, at least you hear me. So first of all, I would like to thank the organizers for providing again an opportunity for the international organizations to share the latest results of their work, including the OECD. We are happy to be here. This has been an outstanding year for us at the OECD, for us who work in the AI agenda, for multiple reasons. One of the reasons is the fact that we have created a so-called integrated partnership with the Global Partnership on AI. So the family of countries that cooperate and share knowledge together, and not only knowledge, but hopefully also solutions. The family is expanding, so now we are covering 44 different jurisdictions from all around the world. I was trying to calculate actually what proportion of the world population we cover in the Global Partnership on AI now, and it’s 40% of the world population. So it’s really a significant club. Now, notwithstanding the enlargement and possible further enlargement in 2025, we managed, as was already mentioned by Lucia, to update the first ever intergovernmental document on AI, which was adopted in 2019 by the OECD, the so-called OECD AI principles, which were then later incorporated into the G20 AI principles, into the first international convention on AI at the Council of Europe, with participation of non-European countries, and to some extent also into the AI Act of the European Union, with which some of you may be familiar. So there are some successes that we really can look back at, and I must say I’m proud of the whole group that we managed. 
Now, when it comes to the reasons why we had to update the OECD AI principles in 2024, it was primarily for reasons of clarity, for reasons of reflecting on the latest technological development, and of course we had to take account of many different interests that have been raised. As you know, the OECD works, and now also the Global Partnership on AI, after the integration, we all work on a consensus basis. So in order to be able to actually come to any modifications, any updates, we had to listen to basically hundreds of people, not only people acting on behalf of the governments, but also people involved in the expert groups. You will learn more from Clara on the go. So this was a very interesting exercise, but surprisingly enough, we managed to have this revision updated in May by the ministers in Paris. Now, one of the key milestones that I would like to convey to you, on the basis of the work that we did, is actually the definition of the Artificial Intelligence as such. So when we discuss the impact of Artificial Intelligence on privacy or personal data, we really need to make sure that we discuss the same thing. In other words, what is actually the Artificial Intelligence when we talk about it? How we can recognize, or can we actually recognize and make a clear difference between AI and what we would call classical software systems? Now, you can judge our work. If you go to the OECD website, you will find an explanatory memorandum on the updated AI definition there. You will see how we actually arrived at the final solution. I recommend you to read this. And of course, there it is clear from the definition as such that any AI is highly dependent on data, on its quality, and of course, there is a clear bridge to the privacy concerns. The last thing in relation to the AI definition I would like to mention is, of course, the fact that the definition is imperfect by definition. In other words, it’s a work in progress. It will be reviewed again. 
And we also need to understand that making a clear line between software as we know it, or as we knew it, and the new elements that we call Artificial Intelligence is not necessarily as clear-cut as we would wish. We should rather see it as a scale, because also, of course, the systems that we call AI, they are also dependent and interact with classical software as well. So, it’s very delicate. Now, with the privacy, of course, we need to realize that, as I mentioned, AI is hungry for data. It needs data to be actually built and to work properly. The thing is that, of course, any restrictions on the use of data can be detrimental for building of AI models. At the same time, to complete the triangle, it’s not only about building of models and systems, but it’s of course also about the way security environments access information about us and evaluate possible threats and risks. So any limitations there, of course, interact also with this field, which is not always discussed, but we need to be aware of this. So it’s a delicate balance we need to draw between the protection of privacy on one hand, and security needs and the needs of building up of AI models and systems on the other hand. There are three principles in our OECD AI principles, which are foundational also now for the whole global partnership on AI community. And these three principles, they explicitly mention the need to protect privacy. But of course, we recognize that even inside this broad family of countries and jurisdictions, the approaches to privacy vary. And they are, of course, also contingent on certain cultural notions, on political approaches. So many issues are in place there. With that, I would like to commend the work of the expert groups. We have multiple groups comprised of experts feeding into the work of our bodies at the Global Partnership on AI and at the OECD. So this is a treasure, a big asset that we can build upon. You are all welcomed to find out more about the way we work. 
And of course, the more we can engage with you in a meaningful way, the more knowledge and the more understanding we can build. And last but not least, I would like to also commend the work of the UN Advisory Board on AI, of which Ximena is a distinguished member, for Mexico. If you look at the UN Advisory Board report that was published in September, and if you look at the UN Digital Compact that was adopted in New York City also in December, there you will find that basically when it comes to the first pillar of the UN Digital Compact, which is to create knowledge and understanding of the AI systems and the impacts on economy, society, etc., it is actually the OECD and the Global Partnership on AI that is relied on to feed into this first pillar of the UN Digital Compact to provide the necessary knowledge to share it with the global community. So besides the opportunity there at the OECD and Global Partnership on AI to engage with all of you, we can certainly then engage also at the global level together. With that, Lucia, thank you very much again for having me here today.
Lucia Russo: Thank you, Juraj, for providing this overview also of the most recent work of the OECD and what we have been engaged in during this past very busy year. So now I would like to welcome and turn to Jimena. Jimena, you are an international lawyer, scholar, and advisor on AI and peace and security. You also lead a consultancy firm, IQuilibriumAI, which is specialized in AI and peace and security. And as we heard, you served as a member of the United Nations Secretary-General’s High-Level Advisory Body on AI. What we would like to hear from you is if you could unpack the social risks that you have identified at the intersection of AI and privacy, and perhaps also comment on how the proposed UN recommendations aim to create a more robust global framework for responsible AI deployment. Thank you.
Jimena Viveros: Hello. I don’t know if anyone can hear me. Yes? Okay, great. So it is great to be here, sorry for the delay. So thank you for the introduction. And I would also like to start by commending the work of the OECD and the new partnership with GPAI, which I think is going to be very fruitful and very good for advancing global governance and recommendations in this space. So I’m happy to be an expert in several of the working groups, and I look forward to contributing on that. So as Juraj was saying, AI is data, so we cannot have AI without data. And data comes with privacy issues, that’s just a problem. So when we look at it from the perspective of peace and security at large, it brings a lot of problems. Because if we look at it even from, say, the civilian domain, we live in a society where everything we consume, it consumes back our data. Whether we willingly accept it or, you know, just because there’s no other choice. So all of that data gathering by all of these platforms is then fed into systems, which could be civilian, which could be military, which could be of some security organizations, intelligence organizations, and we don’t know what the purpose of it will be at the end. So we see this problem also in terms of all of the decision support systems. And for, say, autonomous weapons and other types of security implications that come along with the systems that work in this space. So we have a lot of complications regarding that. What we also find is now the big hype with generative AI and all of the breaches that come in that space, which we are all very familiar with. Which is all just exacerbated by the different jurisdictions and approaches that are being used universally. So what we’re witnessing is just a patchwork of initiatives. So that’s why we should really strive towards global governance. 
And the work that we did at the advisory body of the Secretary General leading to the Summit of the Future, and what was a part of the Global Digital Compact and the Pact for the Future, it included this because we mentioned the security problems that comes with all of this data breaches, hacking, misuse of information, malicious or unintended uses, both in the civilian and the military domain, which affects the broader international stability frameworks. So we, in the report, highlighted that even beyond the implications of data and privacy security problems at the individual or at the community level, there’s also very large-scale impact on society. And we say in the report that it could even affect democratic institutions as a whole, in terms of misinformation and the erroneous use of data, which can also affect the geopolitical, the economy in different parts of the world, in different regions, as we have seen already. Another problem that we have with data and with privacy in terms of security is the fact that we are now shifting the power dynamics of the world in terms of the technological dependency. So it’s not about who has the best systems, it’s about who has the best data or who has more data. And that is something that has been accumulated even years before AI was booming, like it is now. So we have a problem. We also have a problem in the lack of data. It’s a risk in itself because misrepresentation, bias, all of these things are a clear problem in terms of data. And this also affects the privacy of children. That’s a big risk that we have identified and everything regarding future generations. So now the question is what we can do about this. So first of all, we should really recognize data as a digital public good. This is something that is also stated in the Global Digital Compact and that has been quite at the high list of the agendas of the Secretary General. All these common digital goods. So data is one of them. 
And what we could do is create a global AI data framework to protect all kinds of human rights that can be affected by the use of data. And obviously implicating privacy issues. The GDC also offers some solutions, for example, awareness raising, capacity building, controlled cross-border data flows to foster a responsible, equitable, and interoperable framework to maximize the data benefits while minimizing the risks to data security and privacy. Because as I said, the lack of data is also a risk in itself. So that’s why it’s so important the work that the OECD and GPAI has been doing in this respect. Because it’s precisely that. Awareness raising and capacity building and just bringing experts together to come up with solutions. Because the risks and the problems we have identified many times. The thing is how to do it and how to come up with actionable recommendations, because this is vital. So the OECD recommendations that were revised now this year with all of the human-centered AI issues are vital, and I recommend whoever hasn’t read them to read them, because it’s really important material that you can find there. And obviously cooperation and synergies across organizations, across jurisdictions, across communities, across everything is vital because everything is complementary and everything helps. So with that I will close. Thank you.
Lucia Russo: Okay, thank you so much Ximena for this is really great work that you have been doing and outlining also the key risks and also some policy solutions already. So now I will move to Clara. As we heard from your eye the OECD has also established an expert group looking at particularly the interrelations between AI data and privacy and you are co-chairing that expert group. So what we would like to hear from you is what are the motivations that led to the establishment of this group but also what methodological approach you’re using to assess comprehensively privacy risks across the AI life cycle and lastly if you could please share with the audience the key findings that have emerged from the first report that was published with the support of the expert group.
Clara Neppel: Thank you for inviting me here as well and I’m very pleased to share our experience with this cross-section what you just mentioned, the collaboration between different communities. So as mentioned by both of my co-panelists before we had privacy issues with AI even before generative AI but this has been exacerbated with the vast collection of data across basically geographies but also the possibility to then re-identify individuals but also to identify let’s say characteristics which were not even disclosed in the first place. I think I very much like to say even very often you are surprised by the thing that the system knows about you which can be accurate or not accurate and if it’s accurate then you’re a kind of Orwellian space and if it’s not accurate you’re in a kind of a Kafka space. So luckily we now know that at least generative AI is not always to be relied on so I think that’s maybe the positive effect of let’s say the vast adoption of AI. So with the OECD that has been so active in AI governance as mentioned by Jura there are already a lot of expert groups so I’m part of the AI and climate expert group as well as on the AI futures expert group and I’m co-chairing now this expert group on AI data governance and privacy and so you asked me about the motivation of why we created this. So I think that in the AI communities you will find a lot of technologies of course also civil society and so on which are looking to the different aspects of AI that start realizing and also establishing frameworks for governance for these different aspects. In the data privacy community we already have an established frameworks, we have jurisdictions we know how to enforce, we have also institutions and of course methodologies. So what we saw in the AI space that there is a lot of innovation that you just mentioned also addressing privacy but without knowing that there is already a lot of work going on in the other community and the other way around. 
So this was I think the main motivation to bring these two communities together and establish this working group and indeed the first deliverable of this working group is this report that was published in June. So one of the deliverable outcomes was really also to map the principles, the AI principles to the privacy principles and as you can see here it’s a lot. I will just go into some which I think are specifically relevant. So principle one is really about inclusive growth, sustainable development and well-being and I think here it’s really something which is very close to my heart namely to weighing economic and social benefits of AI against risk to privacy right and really this for me translates to have the right balance between the metrics of success. So not only concentrate on profit and performance but really also on planet and people and I think that has a lot to do with what we just heard before and privacy being one of the important aspects here. The second is really about really respecting the rule of law, human rights and democratic values and here it’s also interesting to learn from each other’s terminology. So we both have established definition of what transparency means but it’s not for instance exactly the same justice for fairness in the AI space. Transparency relates more into how the system is set up, what kind of deliverable, so how understanding what the outcome is. In the privacy space it’s more about data collection and the intended use. So again we needed to map the different definitions also so that we have the same language and here I also see the human rights impact assessment. So I just had a session yesterday about Huderia which was set up by the Council of Europe, the human right impact assessment framework that also needs to be harmonized with the data protection requirements. So I already talked about transparency. I think robustness security is something that Ximena also alluded to. 
Here it’s also coordinating on data security technologies, privacy-enhancing technologies being for instance one of the most important ones. And last but not least it’s about also accountability, and here I think that’s what we bring, let’s say I’m a technologist myself, what we bring to the data privacy community is the understanding of the technical aspects. So specifically to the AI lifecycle and where in the AI lifecycle privacy can play an important role. Also beyond data collection, because also at the inference phase and other phases privacy is important. So yes, the next one. So this is basically the AI lifecycle which is now the basis for further developing privacy-related recommendations but also others. As you can see, it starts from planning and design, and what is new now, this was also revised, is that we have a new phase of retirement and decommissioning. So it goes through collection and processing of data, building of models, testing, making it available for use, and deployment, operation and monitoring. And you can see here, so basically what we want to do now as a next step of our working group is to go to every phase and see which recommendations, policy recommendations, we have for these phases. Especially when it comes to collection and processing of data we have to see what does it mean, you know, the limitation of AI when it comes to data collection, what does it mean if we are looking at a large language model, data scraping from the web, what are the privacy implications of that, which are of course a lot. What is the role of synthetic data? A lot of large language models are now fed by synthetic data which is also generated by models itself. So here I think it’s an important evolution that we also need to take into account. And of course data quality, which as was mentioned before is important for accuracy but also for discrimination and bias. 
Going further, as you can see here, we then need to see what a right to be forgotten means in AI systems, and what kind of oversight, accountability and transparency measures we can put in place. For the moment we have data cards, but we should work towards having more than that for transparency. This is basically a work in progress: as I said, we want to go into each of these phases, and we also welcome inputs. Thank you.
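The revised lifecycle Clara walks through can be sketched, purely as an illustration (the phase names follow her description; the checklist pairings are assumptions drawn from the discussion, not the working group's actual deliverable):

```python
# Illustrative sketch of the revised AI lifecycle described above, pairing
# each phase with an example privacy question raised in the discussion.
# The mapping is this author's assumption, not the working group's output.
from enum import Enum

class LifecyclePhase(Enum):
    PLAN_AND_DESIGN = "plan and design"
    COLLECT_AND_PROCESS_DATA = "collect and process data"
    BUILD_MODELS = "build models"
    TEST = "test"
    MAKE_AVAILABLE_FOR_USE = "make available for use"
    DEPLOY_OPERATE_MONITOR = "deploy, operate and monitor"
    RETIRE_AND_DECOMMISSION = "retire and decommission"  # the newly added phase

PRIVACY_QUESTIONS = {
    LifecyclePhase.COLLECT_AND_PROCESS_DATA:
        "What are the privacy implications of web scraping and synthetic data?",
    LifecyclePhase.DEPLOY_OPERATE_MONITOR:
        "How is a right to be forgotten honoured after training?",
    LifecyclePhase.RETIRE_AND_DECOMMISSION:
        "How are models and their training data securely disposed of?",
}

def privacy_checklist():
    """Walk every phase in order and report the open privacy question, if any."""
    return [(phase.value, PRIVACY_QUESTIONS.get(phase, "(recommendations to be developed)"))
            for phase in LifecyclePhase]
```

Iterating the enum yields the phases in declaration order, which mirrors the "go through every phase and see which recommendations we have" next step described above.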
Lucia Russo: Thank you, Clara, for this overview. This is really instructive, especially for those of us who are not privacy experts as you are. It's good to see how privacy affects each stage of the life cycle of an AI system. I will now turn to Thiago, a specialist who also brings the perspective of the Brazilian Data Protection Authority. What I would like to ask you is: what are the most critical privacy challenges that you are observing in the context of advanced AI systems? And, on the practical side, how are data protection authorities developing practical approaches and solutions to protect privacy rights while fostering innovation? I'll come with the mic.
Thiago Guimarães Moraes: Okay, well, first of all, thanks a lot, Lucia, not only for the invitation to be here, but also for the invitation to be part of this community, the Group of Experts on AI, Data and Privacy, which I've been following since the beginning of the year, so basically since its inauguration. It has been amazing to be part of this community and to see the work that has been done, which you just very accurately highlighted. And I could start from here: many of the topics Clara just highlighted are part of the day-to-day critical thinking that regulators such as data protection authorities have been struggling with. Starting from this challenges perspective: as the privacy community begins to understand what AI governance and AI regulation mean from a privacy and data protection standpoint, you have to see how all these other values come in. That's why I put that circle there, with privacy, fairness, cybersecurity, transparency and human agency; I know there are others, but these are some of the main values we see in several frameworks. When you look at this at a more technical level, the technical community is always thinking about trade-offs, which makes sense from a technical perspective, because what you're trying to do as a technician is to set parameters and see how much you can achieve of each of these values. But at the same time, as anyone who works in policymaking, especially from a legal approach, knows, human rights cannot be traded off. And that is one of the main challenges: trade-offs between values at a technical level cannot mean the undermining of human rights.
So this for me is the biggest challenge, not only for regulators in the privacy field but in any other; but since data protection authorities have been managing the rights to privacy and data protection, this is our day-to-day: looking at how these measures work in a balancing of the human rights we should be concerned about. Just to give an idea, the image in the middle shows what is by now almost common sense: when we talk about de-identification and anonymization, we face a privacy-utility trade-off. This is, of course, just illustrative. I'm putting this arc there because in earlier work we showed that what we might be looking for is the optimal point where you can still assure some level of privacy while guaranteeing utility for the system. But in real use cases things are not so simple, especially when we consider other values. Take privacy and fairness: fairness itself is already challenging to define at a technical level, and there are several parameters that try to capture aspects of what fairness could mean, like notions of group fairness and parameters that translate what we should expect from statistical parity, for example. When you add privacy issues on top of that, for example how to bring in privacy-enhancing techniques, it gets even more challenging.
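One of the group-fairness parameters mentioned here, statistical parity, can be made concrete with a small sketch (the function name and the two-group simplification are illustrative assumptions, not a reference implementation of any authority's metric):

```python
# Illustrative sketch of statistical parity, one of the group-fairness
# parameters mentioned above. A classifier satisfies statistical parity
# when the rate of positive outcomes is (roughly) equal across groups.
def statistical_parity_difference(outcomes, groups):
    """Difference in positive-outcome rates between two groups.

    outcomes: list of 0/1 predictions; groups: parallel list of group labels.
    A value near 0 suggests parity on this one metric; it says nothing about
    other fairness notions, which is part of the difficulty discussed above.
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch handles the two-group case only"
    rates = []
    for label in labels:
        member_outcomes = [o for o, g in zip(outcomes, groups) if g == label]
        rates.append(sum(member_outcomes) / len(member_outcomes))
    return rates[0] - rates[1]
```

For example, if group "a" receives positive outcomes half the time and group "b" always does, the difference is -0.5, a signal of disparity on this metric. Adding privacy-enhancing noise to the data shifts such rates, which is exactly why combining privacy and fairness guarantees is hard.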
And I'm sharing in the last part some of the work we found, not the ANPD's own work but work by the technical and privacy communities, on how to find an adequate balance when you embed differential privacy, a technique the privacy community knows well, while still ensuring a good level on those fairness parameters. What was particularly interesting in this research, and that's why I'm sharing it, is that for federated learning models, which are trained at the local level before the results are aggregated into the main AI model, you can apply differential privacy to the local parameters to ensure better privacy protection; if you apply it only at the global level, you actually leave the local models unprotected from the privacy perspective. Another interesting finding is that you have to fine-tune the level of noise you add with differential privacy, because if you go too far, you not only lose accuracy but bring in several other issues. Can you hear me? Okay, half of the room is listening to me. Good. Well, at every Internet Governance Forum we have a digital forum, so we see how challenging the tech can be in practice, right? So, let's turn to the last slide.
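The idea of applying differential-privacy noise to local updates before aggregation can be sketched as follows. This is a minimal illustration of the general pattern, not the method from the research cited above; the clipping norm and noise scale are made-up parameters:

```python
# Minimal sketch of local differential-privacy noise in federated learning:
# each client clips and noises its own model update before the server
# aggregates, so every local contribution is protected individually.
# max_norm and noise_std are illustrative values, not tuned parameters.
import math
import random

def clip(update, max_norm):
    """Scale a client's update down so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(u * u for u in update))
    if norm > max_norm:
        update = [u * max_norm / norm for u in update]
    return update

def privatize_local_update(update, max_norm=1.0, noise_std=0.5, rng=random):
    """Clip, then add Gaussian noise to one client's update (local DP step)."""
    clipped = clip(update, max_norm)
    return [u + rng.gauss(0.0, noise_std) for u in clipped]

def aggregate(updates):
    """Server-side averaging of already-privatized client updates."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]
```

Raising `noise_std` strengthens each client's privacy but degrades the averaged model's accuracy; fine-tuning that level is precisely the balancing act described above.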
So, the DPAs, the ANPD but several others as well, have been working first on guidance, so we can share good practice on specific topics. Just recently the ANPD published work on how generative AI brings challenges for privacy, like what Clara just said: synthetic content is being created from which personal data can be inferred, accurately or inaccurately, and in both cases there are consequences. So we try to tackle part of this discussion, and we know some peers have been doing the same: in France, for example, the CNIL has been doing very interesting work on this, and the authority in Singapore is running a sandbox on privacy-enhancing technologies for generative AI. So the work is happening both at the theoretical level, with guidance, and more hands-on, with sandboxes. We, as the Brazilian Data Protection Authority, are starting a pilot sandbox next year on algorithmic transparency, so we can discuss this concept and what it means in the context of a data protection framework like ours, the Brazilian LGPD. Besides that, all the DPAs, I could say, have been asking themselves what their role will be now that AI regulations are coming up. Should we be the main central AI authority? And even if that's not the case, because sometimes this is a very political discussion, what will our role be, and how can we ensure that it is still guaranteed and protected in a more complex environment where we have to work together with other regulators that also deal with data governance issues? So I'll stop here, but thanks again for the invitation.
Lucia Russo: Thank you, Thiago. It works? Okay. So I have some follow-up questions, but I would like this to be also a conversation with you. So with the mic, maybe I give you this.
Audience: Thank you very much. Can you hear me okay? Thank you for these very interesting interventions. Hold it like this? Okay, thank you. This is a really nice case study of what's happening across technologies: this issue of convergence. I'm with UNICEF. I've led work on AI, including with the high-level advisory board, on how AI impacts children, and we're looking at how neurotechnology impacts children. And AI and neurotechnology have converged. Privacy is the issue, but if you even look at the technologies, whose responsibility is it to set the governance rules? So I was really interested to hear about the working group. And my question, Clara, is: what's the end goal of this interesting and useful exercise? It sounds like there are governance recommendations within the AI space and within the privacy space, and you're mapping them, but what's the output? Is it a new merged set, or an update on both sides, or do we update the principles from time to time, which is necessary? UNICEF also has recommendations on AI and children, and we've been reflecting: they came out in 2021, the world has changed, and it's time to refresh them. The principles stay, but how you apply them changes. So yes, where do we go from here? Thank you.
Clara Neppel: Thank you, and thank you for bringing up the issue of children, because I also wanted to raise it. It's a big issue, not only for privacy but also for the mental health of our future generation. We have different ways to tackle this. Just to give an example on age-appropriate design specifically, because that is something we need to take into account in AI system design: IEEE is working with the 5Rights Foundation to set up, I hope, a universal standard for how to collect children's data. That is one practical example of what we can do for the moment on a voluntary basis; in certain jurisdictions, like the UK, I think it is already obligatory. This is also what we want to do in the working group: first of all, understand what the issues are and whether we already have solutions we can leverage from each other, and identify the gaps. Some of the outputs will very certainly be policy recommendations, but we also very clearly want to target developers, for instance when it comes to scraping data, so that they understand what the legal implications are, because a lot of them don't. So it's both sides.
Lucia Russo: Is there any other question? Yes.
Audience: Hello. Thank you so much for the presentation. I'm from the Nanting Youth Development Service Centre, and in my field of study there's this technology called blockchain, used specifically for data: by storing data across multiple different locations, any slight change to the data can be tracked and detected. This strongly protects the integrity and transparency of data, but the technique itself is at the centre of the debate on privacy. So, like you said, it's a trade-off. I want to know how you think about this technology and how we can actually find that balance.
Thiago Guimarães Moraes: Okay, yes. Does it work? Okay. Thanks for the question, because it's actually very important, and I can give an idea from the perspective of a DPA; I shared the experience of the Brazilian Data Protection Authority, but from what we hear from peers, similar approaches are taken elsewhere. Most data protection authorities, DPAs as we call them, have units that monitor technological progress. I am part of one of these units in Brazil, and we know other institutions, in the UK and in France for example, have something similar. The interesting thing about these technology-monitoring units is that they have to look not only at AI but at several other technologies, like blockchain. So blockchain is a topic we also follow; part of our team is looking at privacy issues specific to blockchain technologies. One very big challenge when we talk about privacy and blockchain is that usually, when you register information on the blockchain, it stays on the blockchain. And we do have a right to erasure of personal data. So how can we honour that if personal data is embedded in the blockchain? What I can say is that this is part of the discussion we are having. It is very challenging to offer a solution, because we have to be very sure about what we propose at a policy level. As far as I know, this is a topic the privacy community has been discussing, but I am not aware of a strong consensus on how it should work. And I believe that to reach an answer we need to engage better with the technical community working on this. We have seen that kind of engagement happening at the AI governance level.
I can say this work of the OECD is a big example, and I think we should have more of the same in the blockchain discussion, because eventually we will see these two emerging technologies come together more and more as time passes. So thanks for bringing blockchain into the discussion.
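One pattern that has circulated in the privacy community for reconciling blockchain immutability with erasure rights, sometimes called "crypto-shredding" or off-chain storage, can be sketched as follows. This is an illustration of one community proposal, not the ANPD's position or settled regulatory guidance; the class and method names are invented for the example:

```python
# Sketch of the off-chain / "crypto-shredding" idea: only a salted hash of
# the personal data is written to the immutable chain, while the data and
# salt live off-chain. Erasing the off-chain record leaves the on-chain
# hash in place but renders it unlinkable to any person.
import hashlib
import os

class OffChainStore:
    def __init__(self):
        self._records = {}  # record_id -> (salt, personal_data)

    def commit(self, record_id, personal_data):
        """Store data off-chain; return the salted hash to write on-chain."""
        salt = os.urandom(16)
        self._records[record_id] = (salt, personal_data)
        return hashlib.sha256(salt + personal_data.encode()).hexdigest()

    def verify(self, record_id, on_chain_hash):
        """Check the immutable on-chain hash against the off-chain data."""
        if record_id not in self._records:
            return False  # erased: the on-chain hash is now unlinkable
        salt, data = self._records[record_id]
        return hashlib.sha256(salt + data.encode()).hexdigest() == on_chain_hash

    def erase(self, record_id):
        """Honour an erasure request without modifying the chain at all."""
        self._records.pop(record_id, None)
```

Whether a leftover salted hash still counts as personal data under frameworks like the LGPD or GDPR is exactly the kind of open policy question the discussion above describes.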
Juraj Čorba: If I may, I'll just briefly intervene. One, two, three. Do you hear me? Okay. On the topic of converging technologies, like blockchain and others, it is very important to realize that when we talk about privacy and AI, we cannot discuss only this; you really need a picture of the whole digital stack. In other words, we can hardly talk about governance of privacy in AI without fully understanding the implications of digital platforms for privacy, and the way platforms are driven by AI or enable AI by collecting data about users. The same applies to the Internet of Things, because data taken from IoT sensors will feed into AI systems. The same applies to digital finance, and possibly now also to new efforts in the field of biology, which can be even more delicate when it comes to privacy and our biological predispositions and design. Blockchain, of course, is a very good example too. The point that was raised really leads us to the necessity of having a full picture of how these different digital spheres interact and how they are integrated into the most sophisticated services and products on the market, because the most successful players manage to integrate all these environments together, and then the implications for privacy are even more imminent.
Jimena Viveros: Just to add to the conversation: if we're talking about human rights, and I think we should start from there, the right to privacy stems from the right to identity, which is also very linked to the right to be forgotten. What we're witnessing now is an effort to foster the protection of our personal digital identity, or signature, or print, and that is a new type of concept we haven't had to think about before. When our personal information, especially biometrics, genomics, data from the neurotechnologies that are emerging, all types of personal information, is locked into something such as blockchain or any other technology or environment, it's complicated, especially because sometimes the capturing of this information isn't consensual or well informed. This has been a problem for a long time, but the question now is what is being done with this information, whether it's locked in, whether it's just being captured, or whatever is happening. Coming back to the implications for peace and security, we can think of predictive policing in law enforcement, of border control, again with biometrics, which is pretty dangerous, and even of governmental services: access to healthcare, to loans, to financial services, to housing. All of these things are being predetermined by the data that is stored and by how it represents or misrepresents a person. So I think it's very important to remember that at the basis of privacy is identity, the most precious thing we have, and that's why we should all strive to protect it.
Lucia Russo: Thank you so much to all the speakers. We have two questions here and one online. I'll take the one online first, but we have only two minutes to go, so please, quick reactions from the speakers. The question is: how do we deal with privacy by design within the changing state of AI? One quick reaction, so that we can hear another question from the floor.
Thiago Guimarães Moraes: Well, this discussion is very welcome, because when we started discussing by-design processes, like privacy by design, we were asking how to go hands-on from here: we are building great policy frameworks, but how do these frameworks translate into concrete considerations? What has proven to be a good experience on the part of the DPAs is using sandboxes. Across the privacy sandboxes that have been organized, by the CNIL, the Norwegian DPA, the ICO, now Brazil, and Singapore, what we are trying to test with a particular technology, AI for example, but it can also be blockchain or simply a data-sharing practice, is to come up in the end with good practices; it is practical experimentation on privacy by design. I'll stop here, because I know we don't have much time.
Clara Neppel: I would just like to add one sentence. Some of these issues are so important that they should be enforced; coming back to children, I think the collection of children's data should really be regulated, because it has enormous implications for them and for our society. For others, the context will be important; as was mentioned before, privacy is context-dependent. Some things need to be enforced by regulation; sometimes we need to take privacy into account for a specific use, hopefully without trade-offs, or with the optimal trade-off. Thank you.
Lucia Russo: Thank you. I think we are at time plus five minutes. Thank you. Okay, so please.
Audience: Thank you very much. Martina Legal Malakova from Slovakia. I have a question for Jimena. Do you think lawyers today can protect human rights, given that for new emerging technologies we often don't have laws, only principles? Thank you.
Jimena Viveros: Yes, that is indeed a problem. These principles and guidelines are very useful as stepping stones, but they're not binding, and then we come to the problem of enforceability. What we need is the adoption of the standards, protocols, guidelines, principles, however you want to call or frame them, at the national level, and to push for them to be consolidated into global governance. We need a framework that is truly global, internationally, because all of this is transboundary. Coming back to what I said at the beginning, we just have this patchwork of initiatives, and even if they are regional, that's not enough. We need something global, so that everyone is protected in the same way, because our information is everywhere. We need to convert principles into action: action that is enforceable, that can be monitored and verified, and that has proper oversight mechanisms. That's why I mentioned before that a centralized authority conducting this oversight at the international level would be a good approach. But in the meantime, all we can do, and it's very valuable work, is these principles, which are ethical values, always stemming from human rights, which already exist. And the problem we're facing now is the reopening of even those basic human rights that have been there for the past 70 years: with the excuse of AI, everyone is opening up the box again and rethinking whether they are applicable. They are always applicable; we just need to find a way to integrate them into the reality we're living in. So the solution is to get governments to regulate in a harmonized way, and then make it a global governance regime. Thank you.
Lucia Russo: Okay, the very, very last question.
Audience: Thank you so much for your presentation. My name is Hasara Tebi. I'm from the Mawadda Association for Family Stability in Riyadh, Saudi Arabia. What I have is actually not a question; it's an input, some food for thought. The rapid advancement of AI technology has led to increased collection and processing of personal data, often without sufficient safeguards to protect privacy. Innovation in AI relies heavily on vast amounts of data, heightening the risk of privacy violations and the misuse of data in ways that can harm individuals. There is growing concern that current legislation lags behind technological progress, creating gaps that allow the exploitation of personal information without explicit consent or comprehensive understanding by individuals. We call for strengthened legal frameworks: update and enhance legislation to ensure effective privacy protection in the age of AI; ensure transparency and accountability by requiring companies and organizations to clearly disclose how data is collected and used, while implementing robust accountability mechanisms for violations; and engage civil society by including civil society organizations and users in the development of AI and privacy-related policies and regulations. We recommend the following: develop and utilize impact assessment tools to assess the impact of AI technologies on privacy before their implementation; raise awareness and provide training programs for developers and policymakers that emphasize the importance of privacy and strategies to protect it during the design and deployment of AI systems; and finally, encourage exceptional initiatives such as His Royal Highness the Crown Prince's Global Child Protection in Cyberspace (CPC) initiative, which aims to strengthen collective action, unify international efforts and raise global awareness among decision-makers about the growing threats to children in cyberspace. Thank you.
Lucia Russo: Thank you so much, and I think we couldn't have had a better way to end this fascinating debate. We could have gone on and on discussing with you; it's a topic that deserves a lot of policy attention, as we are seeing, and it is really at the core of the discussions we are undertaking in the international AI governance and privacy sphere. So with that, I would really like to thank the distinguished speakers here, Juraj, Jimena, Clara and Thiago, for their excellent contributions, as well as the audience for participating so vividly in this discussion with us. Thank you.
Juraj Čorba
Speech speed
132 words per minute
Speech length
1485 words
Speech time
674 seconds
AI systems are highly dependent on data, creating privacy concerns
Explanation
Juraj Čorba emphasizes that AI systems require large amounts of data to function properly. This dependency on data raises significant privacy concerns as it involves collecting and processing vast amounts of information, potentially including personal data.
Evidence
The OECD AI principles explicitly mention the need to protect privacy.
Major Discussion Point
Privacy Challenges in Advanced AI Systems
Agreed with
Clara Neppel
Thiago Guimaraes Moraes
Jimena Viveros
Agreed on
AI systems pose significant privacy challenges
OECD updated AI principles to reflect technological developments and privacy concerns
Explanation
Juraj Čorba discusses the recent update to the OECD AI principles. The revision was made to address the evolving technological landscape and increased challenges raised by advanced AI systems, including privacy rights.
Evidence
The updated OECD AI principles were adopted in May by ministers in Paris.
Major Discussion Point
Policy and Governance Approaches for AI and Privacy
Agreed with
Jimena Viveros
Thiago Guimaraes Moraes
Agreed on
Need for global governance and harmonized regulations
Convergence of AI with other technologies like blockchain and neurotechnology creates new privacy challenges
Explanation
Juraj Čorba points out that AI is converging with other technologies such as blockchain and neurotechnology. This convergence creates new and complex privacy challenges that need to be addressed.
Evidence
Examples of converging technologies mentioned include Internet of Things, digital finance, and biology.
Major Discussion Point
Intersections of AI, Privacy, and Other Technologies
Clara Neppel
Speech speed
137 words per minute
Speech length
1492 words
Speech time
648 seconds
Generative AI exacerbates privacy issues through vast data collection and potential re-identification
Explanation
Clara Neppel highlights that generative AI has intensified privacy concerns due to its extensive data collection practices. This technology also introduces the possibility of re-identifying individuals or revealing characteristics that were not initially disclosed.
Evidence
Neppel mentions the surprise people often experience when AI systems know things about them that weren’t explicitly shared.
Major Discussion Point
Privacy Challenges in Advanced AI Systems
Agreed with
Juraj Čorba
Thiago Guimaraes Moraes
Jimena Viveros
Agreed on
AI systems pose significant privacy challenges
Importance of weighing economic benefits of AI against privacy risks
Explanation
Clara Neppel emphasizes the need to balance the economic and social benefits of AI against potential privacy risks. She suggests that success metrics should not only focus on profit and performance but also consider impacts on people and the planet.
Evidence
Neppel refers to the OECD AI principle of inclusive growth, sustainable development, and well-being.
Major Discussion Point
Balancing Innovation and Privacy Protection in AI
Differed with
Thiago Guimaraes Moraes
Differed on
Approach to privacy protection in AI systems
Importance of age-appropriate design and protecting children’s data
Explanation
Clara Neppel stresses the significance of age-appropriate design in AI systems, particularly concerning the collection of children’s data. She highlights this as a crucial issue not only for privacy but also for the mental health of future generations.
Evidence
Neppel mentions IEEE’s work with the Five Rights Foundation to establish a universal standard for collecting children’s data.
Major Discussion Point
Balancing Innovation and Privacy Protection in AI
Thiago Guimarães Moraes
Speech speed
132 words per minute
Speech length
1855 words
Speech time
839 seconds
Trade-offs between privacy, fairness, and utility in AI systems pose challenges
Explanation
Thiago Guimaraes Moraes discusses the complex trade-offs between privacy, fairness, and utility in AI systems. He points out that while technicians often think in terms of trade-offs, from a human rights perspective, these values cannot be compromised.
Evidence
Moraes provides an example of the challenge in balancing privacy and fairness in federated learning models using differential privacy techniques.
Major Discussion Point
Privacy Challenges in Advanced AI Systems
Agreed with
Juraj Čorba
Clara Neppel
Jimena Viveros
Agreed on
AI systems pose significant privacy challenges
Differed with
Clara Neppel
Differed on
Approach to privacy protection in AI systems
Data protection authorities are developing guidance and sandboxes to address AI privacy issues
Explanation
Thiago Guimaraes Moraes explains that data protection authorities are creating guidance documents and implementing sandbox environments to address privacy challenges in AI. These efforts aim to share best practices and provide practical solutions for privacy protection in AI systems.
Evidence
Moraes mentions the Brazilian Data Protection Authority’s upcoming pilot sandbox on algorithmic transparency.
Major Discussion Point
Policy and Governance Approaches for AI and Privacy
Agreed with
Juraj Čorba
Jimena Viveros
Agreed on
Need for global governance and harmonized regulations
Need for privacy-enhancing technologies and techniques like differential privacy
Explanation
Thiago Guimaraes Moraes emphasizes the importance of privacy-enhancing technologies and techniques, such as differential privacy, in addressing AI privacy challenges. These approaches can help balance privacy protection with maintaining utility and fairness in AI systems.
Evidence
Moraes references research on applying differential privacy in federated learning models to enhance privacy protection.
Major Discussion Point
Balancing Innovation and Privacy Protection in AI
Challenges of implementing “privacy by design” in rapidly changing AI landscape
Explanation
Thiago Guimaraes Moraes discusses the difficulties of implementing privacy by design principles in the context of rapidly evolving AI technologies. He emphasizes the need to translate policy frameworks into concrete considerations for AI developers.
Evidence
Moraes mentions the use of regulatory sandboxes by various data protection authorities to test and develop good practices for privacy by design in AI systems.
Major Discussion Point
Balancing Innovation and Privacy Protection in AI
Blockchain’s immutability poses challenges for data deletion rights
Explanation
Thiago Guimaraes Moraes highlights the conflict between blockchain technology’s immutability and the right to erasure of personal data. This creates a significant challenge for privacy protection in blockchain-based systems.
Evidence
Moraes mentions that this is an ongoing discussion in the privacy community, but no strong solutions have been proposed yet.
Major Discussion Point
Intersections of AI, Privacy, and Other Technologies
Jimena Viveros
Speech speed
143 words per minute
Speech length
1539 words
Speech time
645 seconds
AI data collection and use can affect democratic institutions and geopolitics
Explanation
Jimena Viveros points out that the extensive data collection and use by AI systems can have far-reaching impacts beyond individual privacy. She argues that these practices can affect democratic institutions and geopolitical dynamics on a large scale.
Evidence
Viveros references the potential for AI-driven misinformation and erroneous use of data to impact democratic processes and regional economies.
Major Discussion Point
Privacy Challenges in Advanced AI Systems
UN recommendations aim to create a global AI data framework to protect human rights
Explanation
Jimena Viveros discusses the UN’s efforts to establish a global framework for AI data governance. This framework aims to protect various human rights that can be affected by AI’s use of data, with a focus on privacy protection.
Evidence
Viveros mentions the Global Digital Compact and its proposals for awareness raising, capacity building, and controlled cross-border data flows.
Major Discussion Point
Policy and Governance Approaches for AI and Privacy
Need for global governance and harmonized regulations to address transboundary nature of data
Explanation
Jimena Viveros emphasizes the necessity for global governance and harmonized regulations in AI and data protection. She argues that the transboundary nature of data requires a unified international approach rather than a patchwork of regional initiatives.
Evidence
Viveros suggests the creation of a centralized international authority for oversight and monitoring of AI and data governance.
Major Discussion Point
Policy and Governance Approaches for AI and Privacy
Agreed with
Juraj Čorba
Thiago Guimaraes Moraes
Agreed on
Need for global governance and harmonized regulations
AI’s use of biometric and genomic data raises concerns about digital identity protection
Explanation
Jimena Viveros highlights the privacy risks associated with AI’s use of sensitive biometric and genomic data. She emphasizes the importance of protecting individuals’ digital identities, which are closely linked to the right to privacy and the right to be forgotten.
Evidence
Viveros mentions examples of how this data could be used in law enforcement, border control, and access to various services like healthcare and finance.
Major Discussion Point
Intersections of AI, Privacy, and Other Technologies
Agreed with
Juraj Čorba
Clara Neppel
Thiago Guimaraes Moraes
Agreed on
AI systems pose significant privacy challenges
Agreements
Agreement Points
AI systems pose significant privacy challenges
Juraj Čorba
Clara Neppel
Thiago Guimaraes Moraes
Jimena Viveros
AI systems are highly dependent on data, creating privacy concerns
Generative AI exacerbates privacy issues through vast data collection and potential re-identification
Trade-offs between privacy, fairness, and utility in AI systems pose challenges
AI’s use of biometric and genomic data raises concerns about digital identity protection
All speakers agreed that advanced AI systems, particularly generative AI, pose significant privacy challenges due to their extensive data requirements and potential for re-identification or misuse of personal information.
Need for global governance and harmonized regulations
Juraj Čorba
Jimena Viveros
Thiago Guimaraes Moraes
OECD updated AI principles to reflect technological developments and privacy concerns
Need for global governance and harmonized regulations to address transboundary nature of data
Data protection authorities are developing guidance and sandboxes to address AI privacy issues
The speakers emphasized the importance of developing global governance frameworks and harmonized regulations to address the transboundary nature of data and AI-related privacy challenges.
Similar Viewpoints
Both speakers highlighted the need to balance the benefits of AI against potential risks to privacy and broader societal impacts, including effects on democratic institutions and geopolitics.
Clara Neppel
Jimena Viveros
Importance of weighing economic benefits of AI against privacy risks
AI data collection and use can affect democratic institutions and geopolitics
Both speakers emphasized the importance of implementing specific technical and design measures to enhance privacy protection in AI systems, particularly for vulnerable groups like children.
Thiago Guimaraes Moraes
Clara Neppel
Need for privacy-enhancing technologies and techniques like differential privacy
Importance of age-appropriate design and protecting children’s data
Unexpected Consensus
Convergence of AI with other technologies creating new privacy challenges
Juraj Čorba
Thiago Guimaraes Moraes
Convergence of AI with other technologies like blockchain and neurotechnology creates new privacy challenges
Blockchain’s immutability poses challenges for data deletion rights
While the focus was primarily on AI, there was unexpected consensus on the need to consider privacy challenges arising from the convergence of AI with other emerging technologies like blockchain and neurotechnology.
Overall Assessment
Summary
The speakers generally agreed on the significant privacy challenges posed by advanced AI systems, the need for global governance frameworks, and the importance of balancing innovation with privacy protection. There was also consensus on the need to consider the convergence of AI with other technologies in addressing privacy issues.
Consensus level
High level of consensus among speakers, suggesting a strong foundation for developing comprehensive approaches to AI governance and privacy protection. This consensus implies that future policy discussions and regulatory efforts may focus on implementing globally harmonized frameworks that address the complex interplay between AI, privacy, and other emerging technologies.
Differences
Different Viewpoints
Approach to privacy protection in AI systems
Clara Neppel
Thiago Guimaraes Moraes
Importance of weighing economic benefits of AI against privacy risks
Trade-offs between privacy, fairness, and utility in AI systems pose challenges
While Clara Neppel emphasizes balancing economic benefits against privacy risks, Thiago Guimaraes Moraes highlights the challenges in balancing privacy, fairness, and utility, stating that from a human rights perspective, these values cannot be compromised.
Unexpected Differences
Role of blockchain in privacy protection
Thiago Guimaraes Moraes
Juraj Čorba
Blockchain’s immutability poses challenges for data deletion rights
Convergence of AI with other technologies like blockchain and neurotechnology creates new privacy challenges
While both speakers mention blockchain, their perspectives differ unexpectedly. Thiago Guimaraes Moraes focuses on the challenges blockchain poses for data deletion rights, while Juraj Čorba sees blockchain as part of a broader convergence of technologies creating new privacy challenges.
Overall Assessment
Summary
The main areas of disagreement revolve around the approach to balancing privacy protection with innovation in AI, the specific methods for implementing privacy safeguards, and the role of emerging technologies in privacy challenges.
Difference level
The level of disagreement among the speakers is moderate. While they generally agree on the importance of privacy protection in AI systems, they differ in their approaches and emphasis on specific aspects. These differences reflect the complexity of the issue and the need for multifaceted solutions in AI governance and privacy protection.
Partial Agreements
Both speakers agree on the need for enhanced privacy protection, particularly for children’s data. However, they propose different approaches: Clara Neppel suggests age-appropriate design and universal standards, while Thiago Guimaraes Moraes focuses on privacy-enhancing technologies like differential privacy.
Clara Neppel
Thiago Guimaraes Moraes
Importance of age-appropriate design and protecting children’s data
Need for privacy-enhancing technologies and techniques like differential privacy
Takeaways
Key Takeaways
Advanced AI systems pose significant privacy challenges due to their reliance on vast amounts of data
There is a need for global governance frameworks and harmonized regulations to address AI privacy issues
Balancing innovation with privacy protection is a key challenge in AI development and deployment
The convergence of AI with other technologies creates new privacy risks that need to be addressed
Protecting children’s data and implementing age-appropriate design in AI systems is crucial
Resolutions and Action Items
OECD expert group to continue mapping AI principles to privacy principles and develop recommendations for each stage of the AI lifecycle
Data protection authorities to provide guidance and conduct regulatory sandboxes on AI privacy issues
Push for adoption of international AI governance standards at national levels
Unresolved Issues
How to implement privacy-by-design principles in rapidly evolving AI systems
Balancing trade-offs between privacy, fairness, and utility in AI systems
Addressing privacy challenges posed by blockchain and other emerging technologies in conjunction with AI
Enforceability of non-binding AI ethics principles and guidelines
Suggested Compromises
Using differential privacy techniques to balance privacy protection and data utility in AI systems
Developing universal standards for collecting children’s data that balance protection and innovation
Considering context-specific approaches to privacy regulation, with stricter enforcement for critical areas like children’s data
Thought Provoking Comments
AI is data, so we cannot have AI without data. And data comes with privacy issues, that’s just a problem.
speaker
Jimena Viveros
reason
This succinctly captures the fundamental tension between AI development and privacy concerns.
impact
It set the tone for much of the subsequent discussion about balancing AI innovation with privacy protections.
We should really recognize data as a digital public good.
speaker
Jimena Viveros
reason
This reframes how we think about data ownership and governance in the AI era.
impact
It sparked discussion about creating global frameworks for data and AI governance to protect rights while fostering innovation.
Human rights cannot be traded off. And that’s here one of the main challenges. We are talking about trade-off of values in a technical level that they cannot mean undermining of human rights.
speaker
Thiago Guimaraes Moraes
reason
It highlights the tension between technical optimization and fundamental rights protection in AI development.
impact
It shifted the conversation to focus more on how to implement human rights protections in practice when developing AI systems.
We can hardly talk about governance of privacy in AI without actually fully understanding the implications of digital platforms for privacy and the way platforms are being driven by AI or enabling AI via collection of data about the users, right?
speaker
Juraj Čorba
reason
This comment emphasizes the interconnected nature of AI, privacy, and other digital technologies.
impact
It broadened the scope of the discussion to consider AI privacy issues in the context of the entire digital ecosystem.
At the basis of privacy, it’s identity, and that is the one most precious thing that we have, and that’s why we should all strive to protect it.
speaker
Jimena Viveros
reason
This comment gets to the core of why privacy matters, connecting it to fundamental human rights and identity.
impact
It refocused the discussion on the human impact of privacy violations and the importance of protecting individual identity in the digital age.
Overall Assessment
These key comments shaped the discussion by highlighting the complex interplay between AI development, data usage, and privacy protection. They moved the conversation from abstract principles to practical challenges in implementing privacy safeguards, while emphasizing the need for global cooperation and human rights-based approaches. The discussion evolved from technical considerations to broader societal implications, underscoring the multifaceted nature of AI governance and privacy protection in the digital age.
Follow-up Questions
How to balance economic and social benefits of AI against risks to privacy rights?
speaker
Clara Neppel
explanation
This is a key challenge in developing AI governance frameworks that protect privacy while enabling innovation.
How to implement age-appropriate design in AI systems to protect children’s privacy and mental health?
speaker
Clara Neppel
explanation
Protecting children’s data and wellbeing is a critical issue as AI systems become more prevalent.
How to address the challenge of the right to erasure of personal data in blockchain systems?
speaker
Thiago Guimaraes Moraes
explanation
This highlights the tension between blockchain’s immutability and data protection rights.
How to create a global AI data framework to protect human rights affected by data use?
speaker
Jimena Viveros
explanation
A global framework is needed to address the transboundary nature of AI and data flows.
How to develop practical, enforceable global governance mechanisms for AI and privacy?
speaker
Jimena Viveros
explanation
Moving from principles to enforceable rules is crucial for effective AI governance.
How to implement privacy by design within the rapidly changing AI landscape?
speaker
Audience member (online)
explanation
This is important for proactively addressing privacy concerns in AI development.
How to strengthen legal frameworks to ensure effective privacy protection in the age of AI?
speaker
Hasara Tebi (audience member)
explanation
Updating legislation is crucial to keep pace with technological advancements in AI.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online