DC-CIV & DC-NN: From Internet Openness to AI Openness
Session at a Glance
Summary
This discussion focused on exploring the potential application of core Internet values and principles to AI governance and openness. Participants debated whether concepts like openness, interoperability, and non-discrimination that have shaped Internet development could be extended to AI systems. Many speakers emphasized that while AI and the Internet are distinct, there are important interconnections to consider as AI increasingly becomes an intermediary layer between users and online content.
Key points of discussion included the need for transparency and accountability in AI systems, concerns about AI amplifying existing biases and power imbalances, and debates over appropriate regulatory approaches. Some argued for applying Internet openness principles to AI, while others cautioned against direct equivalence given AI’s distinct nature. The importance of human rights frameworks in AI governance was highlighted, as was the need to consider societal and collective rights alongside individual protections.
Participants explored tensions between permissionless innovation and precautionary regulation for AI. There were differing views on the degree of standardization and interoperability needed for AI systems compared to Internet infrastructure. The discussion touched on challenges around AI safety, liability, and the concentration of AI development among a few large companies.
Overall, the session illuminated the complex considerations involved in developing governance frameworks for AI that balance innovation with responsibility. While no clear consensus emerged, the dialogue highlighted important areas for further exploration as AI governance evolves, including potential lessons from Internet governance experiences.
Key points
Major discussion points:
– The relationship and differences between AI and the internet
– Applying internet governance principles like openness and interoperability to AI
– The need for AI regulation and accountability, especially regarding risks and harms
– Balancing innovation with safety and human rights considerations in AI development
– The impact of AI on internet usage and information access
The overall purpose of the discussion was to explore whether and how core internet governance principles and values could be applied to AI governance and regulation. The participants aimed to identify lessons learned from internet governance that could inform approaches to AI.
The tone of the discussion was thoughtful and analytical, with participants offering different perspectives and occasionally disagreeing. There was a sense of grappling with complex issues without clear solutions. The tone became slightly more urgent near the end when discussing concrete next steps and the need for action on AI governance.
Speakers
– Luca Belli: Co-moderator
– Olivier Crépin-Leblond: Co-moderator
– Renata Mielli: Advisor at the Ministry of Science and Technology of Brazil, Chairwoman of CGI (Brazilian Internet Steering Committee)
– Anita Gurumurthy: IT4Change
– Sandrine Elmi Hersi: Leads ARCEP’s (French Regulator) Unit on Internet Openness
– Vint Cerf: Internet pioneer, Chief Internet Evangelist and Vice President at Google
– Yik Chan Chin: Professor at University of Beijing, leads work of PNAI (Policy Network on AI)
– Sandra Mahannan: Data scientist analyst at Unicorn Group of Companies
– Alejandro Pisanty:
– Wanda Muñoz: Member of the Feminist AI Research Network and of UNESCO’s Women4Ethical AI platform
Additional speakers:
– Desiree: Audience member
Full session report
Expanded Summary of AI Governance Discussion
This discussion explored the potential application of core Internet values and principles to AI governance and openness. Participants debated whether concepts like openness, interoperability, and non-discrimination that have shaped Internet development could be extended to AI systems. The dialogue illuminated the complex considerations involved in developing governance frameworks for AI that balance innovation with responsibility.
Key Themes and Debates
1. Relationship between AI and Internet Governance
A central point of discussion was the relationship between AI and Internet governance principles. Vint Cerf, an Internet pioneer, emphasized that AI and the Internet are fundamentally different technologies requiring distinct governance approaches. However, other speakers like Luca Belli and Renata Mielli argued that some core Internet values, such as transparency and accountability, could apply to AI governance. Mielli specifically mentioned the Brazilian Internet Steering Committee’s principles as potentially applicable to AI governance.
There was general agreement that AI is creating a new intermediary layer between users and Internet content, which could significantly change how users access and interact with online information. Sandrine Elmi Hersi noted that this could potentially restrict user agency and transparency in accessing online information. She highlighted predictions that search engine traffic could decline by 25% by 2026 due to AI chatbots, emphasizing the impact of generative AI on internet openness.
2. Openness and Interoperability in AI Systems
The concept of openness in AI systems sparked debate. Anita Gurumurthy argued that openness in AI is complex and doesn’t necessarily lead to transparency or democratization. She critiqued the term “open” as potentially misleading in the context of AI. Vint Cerf pointed out that AI systems are mostly proprietary and not interoperable, unlike Internet protocols. Yik Chan Chin added that standardization and interoperability of AI systems are currently extremely difficult.
3. Human Rights and AI Governance
Wanda Muñoz strongly advocated for a human rights-based approach to AI governance, beyond just ethics and principles. She emphasized the need for accountability, remedy, and reparation when violations of human rights result from AI use. This perspective shifted the discussion towards considering AI governance in terms of concrete human rights obligations and mechanisms for redress.
4. Regulation and Governance Approaches
Participants offered various perspectives on how to approach AI regulation:
– Vint Cerf suggested that accountability is important in the AI world, and parties offering AI-based applications should be held accountable for any risks these systems pose.
– Sandra Mahannan proposed that regulation should focus more on AI developers and models rather than users, highlighting the importance of data quality and challenges faced by smaller players in the AI industry.
– Yik Chan Chin emphasized the need for global coordination on AI risk categorization, liability frameworks, and training data standards.
– Alejandro Pisanty cautioned against trying to regulate AI in general, suggesting instead to focus on specific applications. He also stressed the importance of separating the effects of human agency from the technology itself in AI governance.
5. Impacts and Risks of AI Systems
Several speakers highlighted potential risks and impacts of AI systems:
– Wanda Muñoz warned that AI systems can perpetuate and amplify existing societal biases and discrimination.
– Yik Chan Chin noted that AI poses new cybersecurity risks to Internet infrastructure.
– Alejandro Pisanty raised concerns that generative AI could lead to loss of information detail and accuracy.
Thought-Provoking Insights
Several comments shifted the discussion in notable ways:
1. Anita Gurumurthy challenged the current paradigm of data and wealth concentration in the tech industry, suggesting alternative paths for more distributed value creation.
2. Vint Cerf’s distinction between AI and the Internet prompted more careful consideration of which Internet governance principles may be applicable to AI. He also highlighted potential benefits of AI in improving our ability to ingest, analyze, and summarize information.
3. Sandrine Elmi Hersi’s insights on how AI is fundamentally changing user interaction with Internet content prompted discussion about implications for governance.
Unresolved Issues and Future Directions
The discussion left several key issues unresolved:
1. Balancing innovation and risk mitigation in AI regulation
2. Extent to which Internet governance principles can or should be applied to AI governance
3. Ensuring AI systems enhance rather than restrict access to diverse online information
4. Approaches for global coordination on AI governance given differing national/regional priorities
Participants suggested developing a joint report for the next Internet Governance Forum on elements that can enable an open AI environment. They also proposed continuing collaboration between the Dynamic Coalition on Core Internet Values and Dynamic Coalition on Net Neutrality on AI governance issues.
Follow-up questions raised by participants highlighted areas for further exploration, including:
– Balancing regional diversity and harmonization needs in AI governance
– Strengthening multi-stakeholder involvement in AI governance
– Regulating AI from the developer angle
– Incorporating feminist and diverse perspectives into core values for AI governance
– Developing international norms for liability and accountability in AI
– Regulating AI in specific verticals or sectors
– Ensuring transparency in complex AI systems like large language models
– Addressing potential loss of specific details in AI-generated content
– Approaching AI regulation in relation to internet infrastructure
In conclusion, the discussion highlighted the complex and multifaceted nature of AI governance, involving technical, legal, and human rights considerations. While there was agreement on the need for AI governance, developing a unified approach will require balancing multiple perspectives and priorities, with a strong emphasis on human rights, accountability, and the unique challenges posed by AI as a new intermediary layer in internet interactions.
Session Transcript
Luca Belli: So, let me just start to introduce the panelists and introduce a little bit of the theme, and then I will give the floor to my friend and co-moderator, Olivier Crépin-Leblond. So, our panelists of today, you can already see them on the screen, the remote panelists, and here with us, the on-site panelists. We will start with Renata Mielli, who is advisor at the Ministry of Science and Technology of Brazil, and also the chairwoman of the CGI, the Comitê Gestor da Internet no Brasil, the Brazilian Internet Steering Committee. Then we will have Sandrine Elmi Hersi, who leads ARCEP’s, the French regulator’s, Unit on Internet Openness. Then we will have Sandra Mahannan. Is Sandra with us or not? Yes, so she works for the Unicorn Group and for Omefe Technologies. Then we will have Mr. Vint Cerf, who needs few introductions. He is an internet pioneer and also Chief Internet Evangelist, if I’m not mistaken, also Vice President of Google. And then we will have Yik Chan Chin, who is professor at the University of Beijing, and also leads work of the PNAI, the Policy Network on AI. And last but not least, we should have our friend Alejandro Pisanty somewhere. Sorry, there is also Wanda Muñoz. Sorry. So, do we have Alejandro Pisanty online? I don’t see him. I don’t know whether he is online, but he’s not visible yet; we hope he will be soon. And then we will have also Wanda Muñoz, she’s already online of course, and then last but not least Anita Gurumurthy from IT4Change. All right, now let me just introduce a little bit of the topic and why we have combined these two sessions of the Dynamic Coalition on Core Internet Values and on Net Neutrality and Internet Openness: because we had very similar proposals for this year’s IGF sessions, to bring together individuals that have been working, in some cases for decades, on core internet values, and at least one decade on internet openness, net neutrality and similar issues, to discuss which kind of lessons can be learned from internet openness and core internet values, if any, and can be transposed to the current discussions on AI governance and AI openness. We know very well that the Internet and AI are two different beasts, but there are a lot of things that can overlap: to do AI you somehow need Internet connections and the Internet as a whole, and a lot of things that happen on the Internet, especially most applications nowadays, rely on some sort of AI. So there is a deep connection between the two, but they are not exactly the same thing, and many of the core Internet values or Internet openness principles that we have been discussing for the past decade or so may apply or not. Some things may be more intuitive, like transparency. Transparency of Internet traffic management, which we have discussed on net neutrality issues for a decade, is essential to understand to what extent the ISP is managing traffic: is this reasonable traffic management, is this blocking or throttling unduly some specific traffic or not? This is also essential to understand which kind of decisions are taken by the AI systems we may rely upon for an enormously important part of our lives, from getting loans and credit in banks, to being identified and maybe arrested by police as criminals, or using services with face recognition.
So this kind of transparency, although different, be it in a rule-based system or in a very intensive system like LLMs, which rely on a lot of computational capacity and are more predictive and probabilistic than deterministic, is needed in both cases: we need to understand how they function. And this is not something that, from a rule of law and due process perspective, we can accept to simply say, you know, we don’t know how it works. I understand that in many cases we don’t know how it works, but this kind of transparency is essential as the counterpart of accountability: accountability to users, to regulators, to society at large. And then we have a lot of debates about interoperability and permissionless innovation, which are at the core of the internet’s functioning, but most AI systems are not interoperable. And actually, the most advanced are also developed by a very small number of large corporations, which may lead us to the kind of concentration that non-discrimination and decentralization, which are at the core of the internet and at the core of net neutrality and at the core of internet openness, aim at avoiding. And to conclude, a very concrete example of how this concentration and the lack of net neutrality can even be put on steroids by AI, to some extent. We have very good examples now. We have been debating zero rating and its compatibility or incompatibility with net neutrality for almost the past decade. And we know that in most of the Global South, people access the internet primarily through the Meta family of apps, especially WhatsApp. And the fact that now WhatsApp includes, at the very top of its homepage, Meta AI means that de facto most of the people in the Global South have as an internet experience primarily WhatsApp, and will have as an AI experience primarily Meta AI, period. That is the reality of most of the people, who are sadly poor. And they only have that as an introduction to AI. And they also work, for free, to train that specific AI. So those are a lot of considerations we have to have in our minds to understand to what extent we can transpose internet openness principles, to what extent we can learn lessons from regulation that already exists, and to some extent may have failed over the past decades in terms of internet openness, and also which kind of governance solutions we can put forward in order to shape the evolving AI governance and the hopeful, maybe, AI openness. At this point, this would be the moment where my co-moderator, Olivier, would provide his introductory remarks. But I’m seeing him intensely speaking with our remote moderation team. So as the show must go on, I think we can start with the first speaker that we have on our agenda, that, if I’m not mistaken, is Renata Mielli. I think, Olivier, do you want to give us your introductory remarks? Sorry, Renata. Olivier arrived again, and he is going to provide
Olivier Crépin-Leblond: his introductory remarks, and then we will pass the floor to Renata. Yes, apologies for this. Olivier Crépin-Leblond speaking, and I’ve been running back and forth trying to get all of our remote panelists, because we’ve got quite a few, to have camera access, be able to be recognized by us and so on. So sorry for the running in and out, but thank you for the introduction, and it’s really great to have a meeting of both organizations, both dynamic coalitions together, for a topic which is of such interest. I’m going to give just a quick few words about the core internet values, because I’m not quite sure that everyone in the room knows about those; I see some new faces as well. This dynamic coalition started quite a while ago, based on the premise that the internet works due to a certain number of internet fundamentals that allowed the internet to thrive and to become what it is today. And those are quite basic actually; they’re all technical in nature. So if I just look at them, the first one is the point that the internet is a global resource, an open medium, open to all. It’s interoperable, which means that every machine around the network is able to talk to other machines, and I’m saying machines; when we started it was every computer, but now of course you’re speaking about all sorts of devices. It’s decentralized: there’s no overall central control of the internet, short of the single naming system, the DNS; apart from this, there are so many organizations that are involved in its coordination. It’s also end-to-end, so it’s not application specific: you can put any type of application at the end and make it work with something at the other end, so the actual features reside in the end nodes and not with a centralized control of the network, and that makes it user-centric. End users have the choice of what services they want to access and how they want to access them, and these days, of course, using mobile devices, they’re able to download into their mobile devices any type of application that they want to run. And they don’t really think about the internet running behind the scenes. And of course, the most important thing: it’s robust and reliable. Just think that there are so many people now on a network which started with only a few thousand people, and then a few hundred thousand, and then a million and a few million, and some people back in the day thinking this was going to collapse. Well, it’s still working. And it’s still doing very well and very reliably, considering the number of people on it, the number of people that are trying to break it, and the amount of malware and everything else that is out there. So it’s pretty robust, pretty reliable. A few years ago, we also added another core value, which was that of safety. The right to be safe was one of the things that we felt was important to add as a core value. In other words, being able to allow for cybersecurity measures to make sure the network itself doesn’t collapse, and all sorts of ways, not content control as such, but to make sure that you are safe when you use the internet and you’re not going to be completely overwhelmed by the amount of malware and everything that’s out there. And that’s something which I think we were quite successful in doing: all the antivirus software, all of the devices, all of the things that we now have on the net to make it work. These are very open values. And of course, they’re open for people to adopt.
And of course, we have seen erosion of these over the past years. The openness of the network has been put to the test on many occasions. There has also certainly been, as far as network neutrality is concerned, some traffic shaping and some things affecting it. But on the whole, it’s still global, it’s still interoperable, it’s still got the basic values that we’ve just spoken about now. And whilst we are seeing an erosion, we’re also seeing that it’s quite well understood by players out there. And we’re looking at various different levels, so the telecommunication companies, governments, the operators, the content providers and so on, that we have this equilibrium, if you want. I can’t say a sweet spot, because it keeps on moving forward, but this equilibrium today, and we hope that we will be able to continue having this equilibrium tomorrow, that will make this internet both innovative, keep the innovation, but at the same time also make it as safe as possible and as stable as possible. Because that’s really something that now, with a network that is so important in everyone’s lives, we need to make sure we have for the future. The economic implications and societal implications of having a broken internet are too big for us not to do this. So hopefully that’s a message that’s been well understood. But now we have AI. AI has come up and seems to have made an absolute revolution. Forget everything else that’s happened before, we need to regulate, regulate, regulate. That’s what some are saying. I think today’s session is going to be looking at this. Do we need to regulate, regulate, regulate, or can we learn some lessons from the core internet values and how the internet has thrived to be what it is today, and apply them to artificial intelligence? Well, let’s find out.
Luca Belli: Let’s start with our first panelist, Renata Mielli from the Ministry of Science and Technology. Please, Renata, the floor is yours.
Renata Mielli: Hello. Thank you, Luca. Thank you, Olivier. It is a pleasure to be here discussing this interesting approach about AI and the Internet. I am very happy with this bridge that you bring to us to reflect about AI and the Internet and the core values that we have to have in mind when we talk about this new and pervasive technology. So, thank you very much, my colleagues. I am going to bring some historical perspective and try to make this bridge between the core values that we have in Brazil for the Internet and AI. Well, long-term analysis of Internet development shows that the Internet mostly benefited from a core set of original values that drove its creation, such as openness, permissionless innovation, interoperability and others. The Internet and its technologies historically fostered an interoperable environment guided by solid and universally accepted standards that enabled pervasiveness, shared best practices, collaboration and collective benefit from a unique network deployed worldwide. Just as with other types of technology, the development of the Internet was also based on academic collaboration networks, with researchers that worked together to deploy the initial stages of the global network. Artificial intelligence has been seen as a game-changer for a broad range of fields, from data science and news media to agriculture and related industries. In this sense, it is safe to start from the assumption that AI will greatly impact society in several terms: economically, politically, environmentally, socially and many others. The harder challenge we have is to drive this evolution in such a way that positive impacts surpass the negative ones, with AI being used to empower people in society for a more inclusive and fair future for all, and the first step for that is to have a clear consensus on fundamental principles for AI development and governance. The Brazilian Internet Steering Committee, CGI.br, outlined a set of ten principles for the governance and use of the internet in Brazil. Our so-called Internet Decalogue provides core foundations for internet governance and is very much in line with the core internet values proposed by the IGF’s dynamic coalition, in a way that we believe can be leveraged to also meet expectations for the governance and development of AI systems. Principles such as standardization and interoperability are important for opening development processes, allowing for exchange and joint collaboration among various global stakeholders, strengthening research and development in the field. In the same sense, AI governance must be founded on human rights provisions, taking into account its multipurpose and cross-border applications. Principles such as innovation and democratic, collaborative governance can also be considered as foundations for artificial intelligence, in order to encourage the production of new technologies and to promote multistakeholder governance, with more transparency and inclusion through every related process. The same goes for transparency, diversity, multilingualism and inclusion, which can be interpreted in the context of AI systems development from the perspective that these technologies should be developed using technically and ethically safe, secure and trustworthy standards, curbing discriminatory purposes.
At the same time, the legal and regulatory environments hold particular relevance for the interpretation that the development and use of artificial intelligence systems should be based on effective laws and regulations, in order to foster the collaborative nature and benefits of this technology while safeguarding basic rights. It should also be noted that adopting a principles-based approach ends up generating more general guidelines, which can lead to implementation difficulties. However, this should be balanced with the need for each country to adapt AI governance to its local realities and peculiarities, in addition to granting greater sovereignty over how this governance should take place in terms of politics, economics, and social development. As a bottom line, we could think of AI development and governance as a priority topic for a more intense south-to-south collaboration, fostering the creation and expansion of research and development, as well as open and responsible innovation networks with long-term cooperation agreements and technology transfer, in order to corroborate sovereignty and solid development frameworks for the Global South. It is important not to try to reinvent the wheel, and to draw upon good practices that already exist, such as the global articulations across the IGF and WSIS processes, or even more stable sets of proposed frameworks, such as the NETmundial principles and guidelines, that can orient the evolution of the ecosystem to be even more inclusive and results-oriented. Last but not least, existing coalitions, political groups, and other stakeholders should be included in this process and could be leveraged as platforms for collaboration within digital governance and cooperation as a whole, including in traditional multilateral spaces such as BRICS or the G20. Brazil, for example, held the presidency of the G20 this year, in 2024, and will do the same with BRICS in 2025. We believe that, in both cases, there will be good opportunities for fostering best practices in digital governance collaboration across different countries. Thank you very much.
Olivier Crépin-Leblond: Thank you very much, Renata. Wow, what a start. A lot of points being made here. I’ll ask that we all try and stick to our five minutes, because otherwise we’ll run over, we can speak for hours on these topics. But next is Anita Gurumurthy, IT4Change, and Abdelkader, are they ready?
Anita Gurumurthy: Yes, I’m here. Anita? Can you hear me?
Olivier Crépin-Leblond: She’s there, and she works, she can speak, yes? Perfect. Go ahead, Anita. Fantastic, we can hear you, yes.
Anita Gurumurthy: You can hear me, I hope. Yeah. All right. So, thank you very much. I just heard that from Renata, and also note this wonderful point that Mr. Vint Cerf has made, that the Internet is used to access AI applications, but operationally AI systems don’t need the Internet to function. I mean, I think we are making reference to the fact that, in many ways, algorithms predated the Internet or the Internet-based revolution. However, the fact of the matter is, just like the intimate relationship between time and space, or space and time, we have a relationship between the Internet and contemporary AI, which Mr. Cerf calls agentic AI. Allow me to be a little bit more specific, and critical of openness itself, because I think when I open up my house, what I mean is everyone is welcome. But I think the ideas of the open Internet and open AI do not necessarily, you know, map on to this kind of sentiment. So the term open Internet is used very frequently, but it doesn’t have a universally accepted definition. And that is because, as all of us know, and none of us needs an introduction about the geoeconomics of the data paradigm here, we see that data collection has become pivotal when we talk about the Internet paradigm. And it’s used either to target ads in large proportions or to build products. And only a handful of players have the scale to meaningfully pull this off. So the result is a series of competing walled gardens, if you will. And they, of course, don’t look like the idealized Internet we started with. And today’s technology runs on a string, I would say, of closed networks: app stores, social networks, algorithmic feeds. And those networks have become far more powerful than the web, in large part by limiting what you can see and what you can distribute. So the basic promise of the Internet revolution, the scale, the possibility is, well, I mean, I would say it’s not plausible at this conjuncture. Alongside all of this, the possibilities of community and solidarity haven’t died. Thank God for that. Because we have the open source communities, there are open knowledge communities. And of course, all of these remain open and vulnerable, unfortunately, to capitalist cannibalization and to state authoritarianism. So that is a bit of bemoaning the state of the Internet. And all of this points to an important thing, which is that instead of an economic order that could have leveraged the global Internet for the global commons of a data paradigm, we now have centralized data value creation by a handful of transnational platform companies, and we could actually have had, as Benkler pointed out long ago, a different form of wealth creation. Now, I come to openness in AI, and I’m cognizant of the five minutes that I have. I think it’s worthwhile to look at Irene Solaiman’s analysis of AI labeled open. What is open AI? We’re actually talking about a long gradient, right? You can talk about open as if it’s one thing, but you could actually have something with very minimal transparency and reusability attributes. So that could also be open, and therefore open is not necessarily open to scrutiny. And the critique that other scholars, like Meredith Whittaker, mount against this paradigm is that we don’t necessarily democratize or extend access when we talk about openness. And openness doesn’t necessarily lower costs for large AI systems at scale.
Openness doesn’t necessarily contribute to scrutability, and it often allows for systemic exploitation of creators’ labor. So where do we go from here? I mean, it’s very sobering that with GPT-4, for instance, when it was published by OpenAI, they explicitly declined to release details about its architecture, including model size, hardware, training compute, data set construction, and training methods. So here we are. What we need to do is restore ideas of transparency, reusability, extensibility, and the idea of access to training data, in politicized form. If we don’t do this, then we will be lost. And my last submission here is that to be able to politicize each of these notions and make them part of ex-ante public participation, we need to turn to environmental law and look at the Aarhus Convention, for instance. We need a societal and collective rights approach to openness, whether it’s the open Internet or open AI. And collective rights, societal rights, that do not preclude individual rights or liability for harms caused to individuals. I’m not precluding that, but we still need to understand what will benefit society and what will harm society. So we are looking really at a societal framework for rights, which is supra-liberal, which doesn’t just always come back to “my product caused you harm”, but really looks at the ethics and values of societies, of sovereignty of the people, you know, as a collective. And here, I think we should understand that there are three cornerstones of substantive equality: the right to dignity and freedom from misrecognition, the right to meaningful participation in the AI paradigm, not just in a model, and the right to effective inclusion in the gains of AI innovation, which is for all countries and not just a couple. Thank you.
Luca Belli: Thank you very much, Anita, for this reality shock and for reminding us that actually, behind the label of openness or open, one has to look at the substance of things. And the very good example of OpenAI, whose practice and architecture are really antithetical to openness, despite having open in its own name, allows us to think about the fact that if we want to have market players, very large ones, including multi-billion corporations, stick to their promises, maybe some type of regulation is actually essential. And here it’s very good to then pass the floor to Sandrine Elmi Hersi, because ARCEP has been very vocal and leading on internet openness over the past years of implementation, since 2015, of the open internet regulation in Europe. So it’s very good, based on the experience you had over the past decade, to understand which kind of mistakes might have been made, which kind of limits may exist, and which kind of lessons we can learn to better shape openness of AI. Please, Sandrine, the floor is yours.
Sandrine Elmi Hersi: Thank you. First of all, let me thank the organizers of this session for this important conversation on how to incorporate internet fundamental values in the development of AI. I will focus this introduction on the impact of generative AI on the concept of internet openness. As we know, generative AI is a versatile innovation with vast potential across many sectors and, more broadly, for the economy and society. These technologies also raise several legal, societal, and technical issues that are progressively being tackled. But we can see that policymakers, notably at EU level, at European Union level, have primarily focused their action and initiatives on the risks of these systems in terms of security and data protection, as seen in the EU AI Act. But the impact of these technologies on internet openness, and the potential restrictions these applications could bring on users’ capacity to access, share, and configure the content they have access to through the internet, have only started to become a topic of attention in the public debate now. And yet, generative AI applications are becoming a new intermediary layer between users and Internet content, increasingly unavoidable. For example, a study published by Gartner this year predicts that search engine traffic could decline by 25% by 2026 due to the rise of AI chatbots. And aside from these conversation tools, generative AI applications and generative AI systems are increasingly being adopted by traditional digital services providers, including through well-established platforms like search engines, social media, and connected devices. From this perspective, we can say that generative AI could soon become an integral part of most users’ digital activities, potentially serving as a primary gateway for accessing content and information online. Thanks to their user-friendly interfaces, generative AI tools open up new possibilities to a wider range of users. With generative AI, it has never been easier to create text, images, or even code. However, we must also consider the challenges and risks in terms of Internet openness and users’ empowerment. At ARCEP, we have for a long time emphasized that Internet service providers, the main focus of EU open Internet regulation, are not the only players that may negatively impact Internet openness, understood as a right for end-users and users in general to access and share freely the content of their choice on the Internet. In 2018, we published a report highlighting that, complementary to net neutrality obligations, the impact of search engines, operating systems, and other structuring platforms on devices and Internet openness should be tackled with appropriate regulatory replies. At EU level, the Digital Markets Act, adopted in 2022, has introduced new tools for promoting non-discrimination, transparency, and interoperability measures, addressing some of the problems raised by gatekeepers. But for us, now is the time to assess the impact of generative AI on Internet openness and users’ empowerment. This is why we have started to work on the issue and have already sent first observations to the European Commission from our experience on net neutrality. And we can already see effects of generative AI applications on how users access and share content online. Just some examples: the transition from search engines to response engines is not a neutral evolution and could restrict the user experience.
As the interface of generative AI tools offers users little control and agency over the content they access, providing an ad hoc single response, often with a lack of transparency, no clear sources, and no ability for users to adjust the settings, we must also take into account the inherent technical limitations of AI, including biases, lack of explainability, and the risk of hallucinations that are now becoming part of the digital landscape. And generative AI development also brings fundamental changes to content creation, which could impact how information is shared and the diversity and richness of content available online, because as generative AI tools become primary gateways for content access, AI providers could also capture the essential part of the economic and symbolic value of content dissemination, which could threaten the capacity and willingness of traditional content providers, such as media or digital commons, to produce and share original content to the benefit of the economy and society. While these developments are concerning, at ARCEP we are convinced, as others around the table today, that we can create the conditions necessary to apply the principles of the open internet to artificial intelligence, in terms of transparency, user empowerment, and open innovation. For information, we will publish next year a set of recommendations in that perspective and are looking for partners to work towards this task. And to conclude, a final word to say that we believe we have the collective responsibility to shape the future of artificial intelligence governance in a way that secures its development as a common good. This means notably the adoption of high standards in terms of openness, but also pro-innovation, sustainability, and safety. So thank you again for the opportunity to be here, and I look forward to the discussion ahead.
Olivier Crépin-Leblond: Thank you very much, Sandrine, and thank you for sharing the perspective of a regulator. We now have a perspective from the business community, and that’s Sandra, data scientist analyst at Unicorn Group of Companies. Sandra, you should be able to unmute and take over. Sandra? We cannot hear Sandra. Can we check why we cannot hear Sandra? Sandra, can you try to speak again? No, we can’t. We’re not hearing Sandra online either. We’re not hearing her online. Should we maybe… Sandra, can you do a last attempt? Yes, can you try to speak? There’s a problem with her mic. Yes, there is a problem with… While we try to solve this, in the interest of time, let’s move ahead to the next speaker, and then we will have Sandra maybe later.
Vint Cerf: So, Vint Cerf, please, the floor is yours. Well, thank you very much for asking me to join you today. This is a very, very interesting topic. I will say that AI and the Internet are not the same thing. And I think that the standardization which has made the Internet so useful may not be applicable to artificial intelligence, at least not yet. What you see are extremely complex and large models that operate very differently from each other. And the real intellectual property is a combination of the training material and the architecture and weights of each of the models that are being generated, and those are being treated largely as proprietary. So open access to an AI system is not the same as access to its insides and its training material and its detailed weights and structure. So we should be a little careful not to try to equate the things that make the internet useful and try to force them onto artificial intelligence implementations. I don’t think we’re at a place where standardization is our friend yet. The one place where standardization might help a lot for generative AI and agentic AI would be standard ways, semantic ways, of interacting between these models. Humans have enough trouble with speaking to each other in language, which turns out to be ambiguous. I do worry about agents using natural language as a way of communicating and running into the same problem that humans have, which is ambiguity and confusion and possibly bad outcomes. Generally speaking, if we’re going to ask ourselves whether we should regulate AI in some way, I would suggest, at least in the early days, that we look at applications and the risks that they pose for the users. And so the focus of attention should be on safety, and for those who are providing AI applications to show that they have protected the users from potential hazard. I also feel strongly that there is a subtle risk in our use of generative AI. Those of you who know how these things work know that the large language models essentially compress large quantities of text into a complex statistical model. The consequence of that is that some details often get lost. We have a subtle risk in our use of large language models where we may lose specific details even though the generative output looks extremely convincing and persuasive because of the way it’s produced. I wonder whether we will end up with blurry details as a result of filtering our access to knowledge through these large language models. I would worry about that. I guess the last thing I would say is that accountability is as important in the AI world as I think it is in the internet world. We need to hold parties accountable for potentially hazardous behavior. The same is true for parties offering AI-based applications. Everybody should be accountable for any risks that these systems pose. My guess also is that we should introduce into the training material better provenance, so that we know where the material came from. This would be beneficial for websites in general, knowing where the content came from, so that we can assess its utility and accuracy. I’ll stop there, and thank you for the opportunity to intervene.
Olivier Crépin-Leblond: Thank you very much, Vint. Since we are a bit pressed for time, we’ll go straight over to Yik Chan Chin while we’re working out with Sandra how to get her mic working. Yik Chan, you have the floor.
Yik Chan Chin: Thank you for inviting me. I speak from the PNAI perspective because, as you may know, at the IGF we have an initiative called the Policy Network on Artificial Intelligence. We did a report on interoperability, liability, sustainability, and labor issues. So first of all, I do agree with Vint Cerf in terms of the infrastructure of AI: actually, it’s quite different from the internet, because AI systems and users are supported and connected via the internet, but they’re quite different, because for AI, including algorithms, data, and computing power, technically none of them must be unified or standardized, I mean, technically, okay? Also, according to our past experience, the interoperability of AI is an extremely complex issue; we have been working on the topic for the last two years, but it’s a really extremely difficult issue. So, before I go into detail about interoperability, I think there are two principles worth paying special attention to, because when I look at the questions, you talk about permissionless innovation versus the precautionary principle. When we talk about the open internet, basically we talk about permissionless innovation, and I’m not sure whether this principle should be applied to AI regulation, because AI is extremely complex, and there are some features of AI, for example its complexity, its unpredictability, and also its kind of autonomous behavior, and all these particular features can make AI quite harmful if there’s a risk. So whether we should allow this permissionless innovation approach, which was applied to the open internet, because we know the history of the internet is that in the beginning it was self-governed, but should we allow this to apply to AI systems? I think we should be very cautious about that. But on the other hand, we do see there are some overlapping principles between AI regulation and internet regulation, so I think those values should be applicable to both systems: for example, a human-centered approach, okay, for both AI and internet, inclusiveness, universality, transparency, accountability, robustness, safety and neutrality, and capacity building. All these values are actually applicable to both systems. So in terms of interoperability, which is my area, I would like to say something particularly focused on interoperability. First of all, what is interoperability? Interoperability, basically, is about the capacity of AI systems, machine to machine, to talk to each other, to communicate with each other smoothly, including not only machines but also regulatory policy, you know, so they can communicate and work together smoothly. But this doesn’t mean the regulation or the standard has to be harmonized, because we can have different mechanisms to accommodate interoperability; for example, we can have compatibility mechanisms to accommodate interoperability.
So therefore, you know, first of all, I think interoperability is crucial for the openness of the internet and AI systems, but AI systems can be divergent as well as convergent, as I just explained before, because the systems are quite different and don’t necessarily have to be unified. So, first of all, we need to figure out what areas of AI systems, or even AI regulation or governance, have to be interoperable, and what areas can be allowed to diverge, you know, respecting regional diversity. So from the PNAI… So, in our report, we identified several areas which could be addressed at a global level, which need to have an interoperable framework. For example, AI models’ risk categorization and evaluation, because we see there are different approaches to categorizing and evaluating AI risk: you have an EU approach, and China has a Chinese approach, and even the US just released its global standardization framework mechanisms. So we need to have a kind of interoperable framework in terms of AI risk categorization and evaluation. The second is about liability. We have a huge debate about the liability of AI systems: who should take responsibility, and what kind of responsibility, criminal or civil. We don’t yet have a global framework, or even national frameworks, because there’s still debate at the EU level and the national level. So the liability of AI models is another area that could be addressed at the global level. The third one, which we just mentioned, is about training data sets. So all these issues can be addressed at the global level. And the last thing I want to mention is about how we balance regional diversity and harmonization needs. We need to respect regional diversity in AI governance, but at the same time we can establish compatibility mechanisms to reconcile divergence in regulations. There are different mechanisms we can use, but it’s context dependent, and so case by case. I think, is my time up? So the last thing I want to say is about an area we have to improve, which is the weak regime capacity of international institutions to coordinate and align. That’s a kind of concept we call the weak regime capacity of the international institutions, which means we have a lot of international institutions, like the ITU, IEEE, and the UN, but how can we?
Luca Belli: Can I ask you just to wrap up in one minute so that the others also have time to speak? Thank you.
Yik Chan Chin: Yeah, so the last thing is that we need to have some kind of the global institution which can coordinate different initiatives at the national level, regional level, and the international level in terms of the openness of the AI and the openness of the internet. So the GDC have not provided a concrete solution in terms of how to strengthen the multi-stakeholder in the AI governance, but leave to the WISC-20 to decide. So I think we should address this in the WISC-20 debate. I think I stop there, thank you.
Luca Belli: Fantastic, let’s try to see if now Sandra can be audible. Sandra, can you try to speak so that we can check? We are not hearing you, can you try again?
Sandra Mahannan: How about now?
Luca Belli: Yes, now, yes, perfect. Keep the mic, close your mouth, please. Thank you very much. Thank you.
Sandra Mahannan: Thank you. Sorry for the whole mix up, and once again, thank you so much for the opportunity to be here.
Luca Belli: I’ll try to speak very short. If you can keep the mic very close to your mouth because literally we can hear very well when it’s close to your mouth, and not at all when it is five centimeters from your mouth. Thank you very much. We cannot hear you, Sandra, I’m sorry.
Sandra Mahannan: What’s the issue?
Luca Belli: Sandra, unfortunately, we keep on not hearing you; I think it’s a mic problem. So, if you have another microphone where you are, I suggest you try to change it while we go to the next speaker, Alejandro Pisanty, because we are not able to hear you at this moment, Sandra. So, Alejandro Pisanty, a very old friend, not because he is old at all, but because we have known each other for many years. So, please, Alejandro, the floor is yours.
Alejandro Pisanty: Thank you. Can you hear me well?
Luca Belli: Yes, very well.
Alejandro Pisanty: Thank you. Thank you, Luca and Olivier, for the yeoman’s work you did to put this session together, and to all members of the Dynamic Coalition for the exchange of ideas. I salute also friends I see on screen. I think I see Maarten Botterman, I think I see Desiree Miloshevic and Harald Alvestrand, probably getting that right. So, briefly, because I’m going to be mostly responding as well as putting forward what I have prepared: the Dynamic Coalition was created to try to follow on these core values, which, if you take them away, you don’t have the Internet anymore. If you take away openness, for example, you have an intranet. If you take away interoperability, you have a single-vendor network, and so forth. And that’s what we are trying to now extend, or I’d say to challenge how much we can extend it, to AI. We have to be very careful what we call AI. In people’s minds are the generative AI systems that start from text and can give you either more text in a conversational interface, or can give you images, video and audio. But artificial intelligence is a lot more things. It’s molecular modeling, it’s weather forecasting, it’s every use of artificial intelligence that we use for basically three purposes, which are finding patterns in otherwise apparently chaotic information, finding exceptions in information that appears to be completely patterned, and extrapolating from these. And we know that extrapolating, from algebra, extrapolating from things that you only calibrate for interpolation, is always going to be risky. So that’s, you know, our basic explanation and concern for hallucinations in LLM systems. Second, one of the lessons we’ve learned over many years from this dynamic coalition is to separate the effects of the human and the technology, to separate the effects of human agency, human intention. Cybercrime doesn’t happen and wasn’t invented by the internet. It happens because people want to take your money or want to hurt you in some way, and now use the internet as they previously used fax, post, or just tried to cheat you face to face. Same for many other undesirable conducts. So we have to separate what people are wanting to do and how technology modifies it, for example, by amplifying it, by enabling anonymity, by crossing borders, and so forth. And same for AI. It’s not AI that is doing misinformation. We have had misinformation, I think, since at least the Babylonians, and probably even before we had written language. But now we have very subtle and easy-to-apply ways to spread misinformation at a large scale. But we still have to look at the source and the intention of the people who are creating and providing this misinformation, and not try to regulate the technology instead of regulating the behavior, or educating the users to avoid them falling for misinformation. The third large point here is not trying to regulate artificial intelligence in general, in total, but being sure that, by trying to regulate what you don’t like about LLMs doing misinformation, you don’t kill your country’s ability to join the molecular modeling revolution for pharmaceutics, for example. There’s a recent paper by Akash Kapur, which I think is very valuable for this session, which speaks of leaving behind the concept of digital sovereignty and replacing it with digital agency.
Luca and I were in a meeting two weeks ago at New America, in D.C., where this concept was put forward, and it’s a very powerful one, because what I extract from it is that instead of trying to be sovereign by closing borders and putting up tons of rules, which are basically copied from the rules of the countries which actually developed the technology, and based on fears, it’s trying to be powerful, even if you have to sacrifice some sovereignty in the sense that you have to collaborate with other countries, you have to collaborate with another academic institution and so forth, which, by the way, has always been the way of developing technology and academic research. There’s a recent French paper, which came to my attention only yesterday, which speaks about de-demonizing, stop demonizing, artificial intelligence, without, of course, becoming confident or overconfident, but trying to regulate and to promote AI. If your country’s legislators are looking to regulate AI and are not putting a lot of money into research and development and into, let’s say, as Denmark or Japan have done recently, or Italy, putting together a major computing facility for everybody to use to develop AI, they are lying to you, they are cheating you, because they are actually closing the door to the effects of innovation and condemning you to actually getting this only from outside the country in the end, in subtle and uncontrolled ways. How do we bring multi-stakeholder governance, which is another lesson from our dynamic coalition, to artificial intelligence? We have to find a way, maybe to scare the companies with the fear of harder regulation, to come together with other stakeholders like academia, like the users, like rights-holding organizations, and so forth, as we did, for example, with the domain name market with ICANN 25 years ago. It’s not doing an ICANN again, necessarily, but it’s extracting the lessons of how you bring these very diverse stakeholders together to a kind of core regulation designed for the types of systems and risks that are present in reality, and not only the imaginary ones. There has also been some talk about open sourcing, which is very valuable. The risks have already been mentioned, and one risk that has not been mentioned, which we learn from the history of open source software, is derelicts: software that is abandoned, systems that are abandoned, that are not maintained anymore, which are very risky because defects can creep in and never be fixed. And then these things become part of the infrastructure of the internet. We have already seen some major security events, for example, happening from unmaintained open source software which was at the core of different systems. So the challenge here will be to avoid the delusion of a one-world government. We don’t need the GDC. We don’t need a UN artificial intelligence agency. We need to look more at the federated approach. And I think that this will be more approachable, more available; there’s a better path to it. If we do, as for example the UK has been doing, go by the verticals, go by the sector-specific type of regulation, which we already have, use all the tools society already has, like liability for commercial products, liability for public officials who purchase systems which work badly. It’s as bad to purchase a system that does biased or discriminatory assignment of funds in a social security system, as it’s bad to purchase cars that end up killing people in crashes because you don’t have airbags, and that would be
Luca Belli: Thank you, fantastic. Thank you very much, Alejandro. I was going to remind you to wrap up, but you already did it yourself. Fantastic. So let’s see now whether Sandra has a new mic that works, and give her presentation one last shot. Sandra, can you hear us? Yes? We can hear you, we can hear you very well. Excellent, please go ahead.
Sandra Mahannan: I’m so sorry about the mix-up with my mic. I’m going to try to keep this very short. I want to come in from the business angle, so to speak. I work with Unicorn Group of Companies, which is an AI and robotics company. I read one time that AI often reflects the biases of its creators, right? And we all know that AI response quality is a very big concern, because we have cultural biases and religious biases. Recently I was in a religious gathering where religious leaders were discussing the adoption of AI and the concerning, erroneous responses that AI gives. We all know that AI responses are heavily dependent on the quality of the data fed into the model, and the acquisition of such data is usually not cheap; it is very expensive. We are talking about computing power, we are talking about acquisition of data; these are very expensive processes. So my tip would be to regulate AI not really from the user angle: openness should come from the side of the developer, the development angle, where we talk about data quality, data privacy, security, data-sharing protocols, operating in the market as an entity, interoperability, and all of that. Yeah, that would be my take on it. Thank you.
Luca Belli: Fantastic. Thank you very much, Sandra, for your perspective and for being so fast. Shall we now have our last speaker, last but not least, of course?
Olivier Crépin-Leblond: I think we’ll have Wanda Muñoz, who is a member of the Feminist AI Research Network and of UNESCO’s Women4Ethical AI platform. So over to you, Wanda.
Wanda Muñoz: Thank you so much. Can you hear me?
Olivier Crépin-Leblond: Absolutely.
Wanda Muñoz: Okay, okay. Well, thank you so much. I’m delighted to be here. Thanks to the organizers for having me, and thanks, Alejandro, for recommending me. I would like to take a somewhat different perspective from what has been shared so far, because what I’d like to put on the table today is my perspective as someone who comes from policymaking and from human rights implementation; my contributions come from that angle. I will also build on the results of a report from the Global Partnership on AI called Towards Substantive Equality in AI, which I invite you all to review, and which benefited from the amazing leadership of Anita. I take the opportunity to thank her again for her contributions and to say that I fully agree with her intervention. I will start by sharing a few thoughts on the issue of values itself, and then I will move to human rights. First, I think the core values of Internet governance have been very useful for building a common understanding of the Internet we want, one that serves the majority. But arriving at this discussion when these values had already been adopted and implemented for a while, I want to put on the table that maybe we could benefit from analyzing these values from a gender and diversity perspective, and there is already a wealth of research from feminist AI scholarship in this regard. For instance, consider the six core principles of feminist data visualization; I don’t know if you are familiar with them, but I invite you to look them up. They propose values such as rethinking binaries, embracing pluralism, examining power, and aspiring to empowerment. These are quite different from the Internet’s core set of values today, but also complementary. What I like about this other set of values is that they question social constructs, power, and the distribution of resources, and these issues, to me, are inextricably linked both to Internet and to AI governance, yet they are often left out of mainstream discussions. That being said, I’ll move to human rights. Here I would first like to give you a couple of ideas of why a few of us insist that human rights should be front and center in any discussion of AI governance, at least on the same standing as ethics, principles, and values; maybe some of you see it differently. Human rights are not just words. Human rights are actions, policies, budgets, indicators, and accountability mechanisms, which were already mentioned by Renata and Anita. So in the context of artificial intelligence, human rights allow us to reframe the discussion on AI in different terms and to ask different questions. Let me give you three examples. Instead of saying that we must mitigate the risks of AI, from a human rights perspective we would say that when AI harm occurs, it systematically results in violations of human rights that disproportionately affect women, racialized persons, indigenous groups, and migrants, among others. And I’m sure you know of the many dozens or more of documented examples of these that have affected the right to employment, to health insurance, to social services, and many others, which you can find, for instance, in the OECD AI Incidents Monitor.
Another example: instead of saying that in AI governance we should balance risk and innovation, if we acknowledge that the benefits of innovation generally accrue to a privileged few while the brunt of the harm falls primarily on those already marginalized, from a human rights perspective we would talk about the need for AI regulation to ensure accountability, remedy, and reparation when violations of human rights result from the use of AI. And I want to tell you that in the research we carried out for GPAI, where we consulted more than 200 people from all walks of life and backgrounds on five continents, this is possibly the number one demand documented in the report. I also appreciated Yik Chan’s perspective on the need for international norms, specifically regarding liability and accountability. Another example concerns non-discrimination. Generally speaking, people understand non-discrimination as saying, I don’t go out and shout slurs at people in the street, right? So I don’t discriminate. But from a human rights perspective, this is far from enough. What non-discrimination means is that you must take positive action to avoid and to redress the discrimination that already systematically exists in our organizations, in our data, and in our policies. This is particularly the case on the internet and in artificial intelligence. So in a human rights framework, unless we take action, we are effectively perpetuating discrimination. Similarly, we could have a discussion about what a general notion of safety means, but unless we adopt specific actions to ensure safety in light of the vulnerabilities of specific groups, in each specific context, we will keep excluding those who are already most marginalized. And here, Alejandro, as often, I want to respectfully disagree with you when you talk about the need not to demonize technology, because I hear this again and again, and I really don’t think it is a helpful term; it is often thrown at those of us who are pointing out the risks and harms of AI. We are doing this from documented impacts and from evidence, trying to raise the alarm so that the discussion is grounded at least partly in the reality of what AI is causing, in contrast to what we see most of the time, which is AI hype. And of course, I think that for all the problems the UN has, we do need it to lead an effort on AI regulation if we want to have at least some equality in terms of negotiation; this leads to larger issues with multilateralism that I hope we can discuss another time. Just to conclude: when we speak about AI governance, what is at stake has the potential to change the core of how our societies function. So I fully agree with Anita on the need for a societal and collective rights approach, in addition to a human rights one. And to me, this cannot happen without regulation. Thank you.
Luca Belli: Fantastic. Thank you very much, Wanda, for bringing these intense, thought-provoking points. I think this is a very good way to open the discussion to the floor; we have a good 20 minutes to speak. Let me also share with the floor that one of our intentions when we started designing this session was to try to distill some core elements that we could put into a joint report for the next IGF. We know very well that in the six months before the next IGF there is only so much we can produce as an outcome, but a short joint paper on what could be the elements of an open AI ecosystem, or something like that, is feasible. So if you have any ideas, if you can help guide us in identifying these core elements, or if you have any other reflections on what has been said, we have 20 good minutes to discuss. Feel free to be punchy and provocative, while of course remaining diplomatic and respectful. Just raise your hand and a mic will be brought to you.
Olivier Crépin-Leblond: And Luca, if I could add: there are some panels at IGFs where everyone agrees with each other, and I was really pleased to see various viewpoints and some panelists not agreeing with each other, so that’s really good. By the way, if you as panelists have points to make about each other’s interventions, please go ahead. If you’re online, you can put your hand up in Zoom and we’ll see it. And of course, if you’re in the room, put your hand up and a mic will fly in your direction, or perhaps be brought over. Does anyone wish to fire off?
Luca Belli: Who wants to start our collective exercise?
Olivier Crépin-Leblond: It’s a lot to digest.
Luca Belli: Yes, I see. Are you, Desiree, are you stretching or raising your hand? Okay, let’s break the silence with Desiree.
Audience: Hi, Desiree here. Yeah, thank you all for your very rich interventions; there’s a lot to take in. I don’t know whether the title of the session is really about core principles for AI, since this is the dynamic coalition working on core principles of the internet, so I’m really glad to see the differentiation, confirmed by some of our panelists, that AI is not the internet. We also heard that AI is building an intermediary layer, like a user interface, between users and these structures, so I think it’s important to see AI as something built on top of the existing infrastructure. My concern is that we will end up with an internet even fuller of deepfakes and disinformation than at the current stage, and that in trying to have a sustainable internet, we need to be really careful about the capacity we have in society for running the Internet and getting bits of information through the network. Should the layers of the network be looked at separately and protected? What I think I’m hearing, and I’d like confirmation, is that AI, being built on top, should really be regulated as the AI layer, without reaching deep down into the Internet infrastructure as such. But then there are arguments that some networking components will use AI as well. So how do we see this regulation playing out, and what is the core principle here? Is it still net neutrality, that we stay stupid about the bits that go through the network? It just raised a lot of questions in my mind.
Luca Belli: Yeah, thank you, Desiree. We have Anita online who wants to react, and we’ve also got Vint Cerf with his hand up. So let’s take Anita’s and Vint’s reactions and then go around the floor again.
Anita Gurumurthy: I must apologize that I won’t be responding to the point from the floor, because I had wanted to come in earlier. Is it okay if I go in now?
Luca Belli: Yes, please go ahead. And then we have Vint and then Renata.
Anita Gurumurthy: Yeah, it’s a minor point, though it may be linked to what was just observed. When you talk about the Internet and the innumerable struggles in our own regulatory landscape, and I recall my organization and the good fight we put up for net neutrality in relation to our telecommunications authority, the way the idea of non-discrimination inheres in the network is very, very different, I think, from how it figures in artificial intelligence. AI is primarily linked to the truth conditions of a society, and you’re really not necessarily prioritizing non-discrimination; I think that’s a somewhat technicalized representation of the data and AI debates. What we’re actually doing is using discrimination and social cognition in such a manner that data can be used for social transformation. So there is a certain slippage there very often, and in fact, in our joint work with Wanda, we actually said that we might sometimes have to do affirmative action through data. So we really have to be cautious about conflating non-discrimination on the internet with principles for responsible AI.
Olivier Crépin-Leblond: Thank you. Vint?
Vint Cerf: I’m literally just thinking on the fly here about AI as another layer, as the interface into this vast information space we call the internet. First of all, Alejandro’s point that machine learning covers a great deal more than large language models, his mention of weather prediction, for example, resonates with me, because we recently discovered at Google that we can do a better job predicting weather using machine learning models than using the Navier-Stokes equations. But I think we should be thoughtful about the role that machine learning and large language models might play. One possibility is that they filter information in a way that gives us less value; that would be terrible. But another alternative is that they help us ask better questions of the search engines than we can compose ourselves. We have a little experience with this through the use of what’s called a knowledge graph, which helps expand queries into the index of the World Wide Web and then pull data back. Summarization could lose information, which is a potential hazard, but I think we should be careful not to discard the utility that these large language models might have in improving our ability to ingest, analyze, and summarize information. So this is an enormous canvas, which is mostly blank right now, and we’re going to be exploring it, I’m sure, for the next several decades.
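[Illustration: Cerf’s point about knowledge graphs expanding queries can be sketched in a few lines of Python. The graph entries and the expand_query function below are hypothetical toy examples chosen by the editors, not a description of Google’s actual Knowledge Graph.]

    # Toy knowledge graph: each entity maps to related terms (hypothetical data).
    KNOWLEDGE_GRAPH = {
        "internet governance": ["IGF", "multistakeholder model", "core internet values"],
        "large language models": ["transformers", "training data", "hallucination"],
    }

    def expand_query(query: str) -> list[str]:
        """Return the original query plus any related terms the graph knows about."""
        related = KNOWLEDGE_GRAPH.get(query.lower(), [])
        return [query, *related]

    # A search index queried with the expanded term list can retrieve documents
    # that mention related entities the user never typed.
    print(expand_query("Internet governance"))
    # ['Internet governance', 'IGF', 'multistakeholder model', 'core internet values']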
Olivier Crépin-Leblond: Renata?
Renata Mielli: Just a point. Of course AI and the Internet are not the same thing; they are different. But in my view, the challenges we face in addressing the risks of AI and its impacts on society are pretty much the same in terms of the need for more transparency, accountability, and diversity, and for more decentralized and democratic arrangements, not only for the Internet but for AI. And we also need to focus on how AI is changing the Internet and how people interact with it. We are now in a situation where, for example, when you do a search on Google, you no longer get a list of links to click and content to engage with about something, how to bake an orange cake, for example, because the artificial intelligence brings you the result and you don’t need to click anymore. And a lot of the time the results are not accurate and carry bias, and this is changing how we experience the internet. So they are not the same thing, but each impacts the other, and we have to keep in mind that the core values we need in order to regulate AI at this moment must take into account the transparency, accountability, liability, and so on, the neutrality and the other core values, that we have for the internet.
Yik Chan Chin: Yes, in internet governance we have the core infrastructure, which we all recognize has to be a public good, even a global public good; that is not in question, and we continue to regulate it separately as core infrastructure. The AI system, I think, sits more at the application layer. But there is an issue many people have already touched on, which is cybersecurity, because AI actually causes a lot of problems there: it makes the internet more vulnerable to cyber attacks. So that is one area where, as my colleague said, they have a mutual impact, where AI may enlarge the dangers and harms to internet stability. That is one area we have to focus on, but there are other effects that require long-term observation: the impact of AI on the internet and on its core infrastructure.
Luca Belli: Let’s go to Sandra, and unless anyone else has an urgent comment or question, we can then wrap up. Sandra, please, can you speak again? Yes, very well.
Sandra Mahannan: I just wanted to react quickly to what the speaker two interventions ago said about her concern over erroneous responses, because somehow AI just summarizes the feedback from searches. I totally agree with her; it was one of the points I made earlier. Whether we like it or not, AI is here to stay, so these biases are really concerning, because we would agree that there are big wigs in the business who get to, I don’t want to say control the narrative, but for lack of a better way of expressing it, do just that. The small players, no matter how accurate they are, don’t really get reach; access to them is really low because the big wigs have occupied the market, which means that people automatically go to them. And what happens when decentralization is not really happening, when users are not really getting access to the other players in the industry? That is why I think it is really important that regulation comes down heavily on the development side: on the developers and on the models themselves.
Luca Belli: I think that, as we will be kicked out of the room in two minutes, it is now time to wrap up and to thank the participants for their very thought-provoking comments and presentations. I think we have illustrated very well the complexity of this issue, and also the value of keeping up this very productive joint venture, so that at the next IGF we can present the results of what could be a very brief report on the elements that can enable an open AI environment, as was also suggested in the chat during the session: our best effort to distill the knowledge shared over an hour and a half. I think the mics are giving up on us; let me try this one. It’s a sign that we have to wrap up. Thank you very much, everyone; we will do our best to consolidate everything we have learned today into this report. Thank you very much.
Olivier Crépin-Leblond: And I should just add: if anybody is interested in joining the Dynamic Coalition on Network Neutrality or the one on Core Internet Values, come and talk to us, we’ll take your name and email address, and we’ll be very happy to have you on board. Thanks again to all of our panelists; really great job. Thank you so much.
Alejandro Pisanty: Thank you again and congratulations for the session.
Wanda Muñoz: Thank you.
Vint Cerf
Speech speed: 139 words per minute | Speech length: 790 words | Speech time: 338 seconds

AI and Internet are fundamentally different technologies requiring distinct governance approaches
Explanation: Vint Cerf emphasizes that AI and the Internet are not the same thing and should not be governed in the same way. He suggests that the standardization which has made the Internet useful may not be applicable to artificial intelligence, at least not yet.
Evidence: Cerf points out that AI systems are extremely complex and large models that operate very differently from each other, with proprietary training data and architectures.
Major discussion point: Differences and similarities between Internet and AI governance
Agreed with: Luca Belli and Renata Mielli, on the point that AI and the Internet are distinct technologies requiring different governance approaches
Differed with: Luca Belli and Renata Mielli, on the applicability of Internet governance principles to AI

AI systems are mostly proprietary and not interoperable, unlike Internet protocols
Explanation: Vint Cerf highlights that AI systems, unlike Internet protocols, are largely proprietary and lack interoperability. He points out that the intellectual property in AI systems lies in their training data, architecture, and weights, which are often kept secret.
Major discussion point: Openness and interoperability in AI systems
Differed with: Anita Gurumurthy, on openness in AI systems

AI governance should focus on regulating applications and risks, not the technology itself
Explanation: Vint Cerf suggests that AI governance should concentrate on regulating specific applications and their associated risks, rather than attempting to regulate AI technology as a whole. He emphasizes the importance of focusing on safety and protecting users from potential hazards.
Major discussion point: Regulation and governance approaches for AI
Luca Belli
Speech speed: 144 words per minute | Speech length: 2057 words | Speech time: 854 seconds

Some core Internet values like transparency and accountability can apply to AI governance
Explanation: Luca Belli suggests that certain principles from Internet governance, such as transparency and accountability, could be relevant to AI governance. He argues that these principles are essential for understanding how AI systems function and for ensuring accountability to users, regulators, and society.
Evidence: Belli gives examples of how transparency is crucial in both Internet traffic management and AI decision-making processes, such as in credit scoring or facial recognition systems.
Major discussion point: Differences and similarities between Internet and AI governance
Agreed with: Vint Cerf and Renata Mielli, on the point that AI and the Internet are distinct technologies requiring different governance approaches
Differed with: Vint Cerf and Renata Mielli, on the applicability of Internet governance principles to AI
Unknown speaker
Speech speed: 0 words per minute | Speech length: 0 words | Speech time: 1 second

AI is building an intermediary layer on top of Internet infrastructure
Explanation: The speaker suggests that AI is creating a new layer between users and Internet content. This intermediary layer is becoming increasingly unavoidable and could significantly change how users access and interact with online information.
Evidence: The speaker cites a Gartner study predicting that search engine traffic could decline by 25% by 2026 due to the rise of AI chatbots.
Major discussion point: Differences and similarities between Internet and AI governance
Agreed with: Sandrine Elmi Hersi, on the point that AI is creating a new layer between users and Internet content
Renata Mielli
Speech speed: 116 words per minute | Speech length: 1058 words | Speech time: 544 seconds

AI and Internet governance face similar challenges around transparency, accountability and decentralization
Explanation: Renata Mielli argues that while AI and the Internet are different, they face similar governance challenges. She emphasizes the need for transparency, accountability, and decentralization in both domains.
Evidence: Mielli gives an example of how AI is impacting Internet search results, where users now often get direct answers from AI instead of links to various sources, potentially reducing transparency and diversity of information.
Major discussion point: Differences and similarities between Internet and AI governance
Agreed with: Vint Cerf and Luca Belli, on the point that AI and the Internet are distinct technologies requiring different governance approaches
Differed with: Vint Cerf and Luca Belli, on the applicability of Internet governance principles to AI
Anita Gurumurthy
Speech speed: 144 words per minute | Speech length: 1108 words | Speech time: 460 seconds

Openness in AI is complex and doesn’t necessarily lead to transparency or democratization
Explanation: Anita Gurumurthy argues that the concept of openness in AI is not straightforward and does not automatically result in transparency or democratization. She suggests that openness in AI can have a wide range of meanings and implementations, not all of which contribute to scrutability or accessibility.
Evidence: Gurumurthy cites the example of GPT-4, where OpenAI declined to release details about its architecture, training data, and methods, despite being labeled as ‘open’.
Major discussion point: Openness and interoperability in AI systems
Differed with: Vint Cerf, on openness in AI systems
Yik Chan Chin
Speech speed: 138 words per minute | Speech length: 1174 words | Speech time: 510 seconds

Standardization and interoperability of AI systems is extremely difficult currently
Explanation: Yik Chan Chin points out that achieving standardization and interoperability in AI systems is currently very challenging. This is due to the complex and diverse nature of AI technologies and applications.
Major discussion point: Openness and interoperability in AI systems

Global coordination is needed on AI risk categorization, liability frameworks, and training data standards
Explanation: Yik Chan Chin argues for the need for global coordination in several key areas of AI governance. This includes developing consistent approaches to categorizing AI risks, establishing liability frameworks, and setting standards for training data.
Major discussion point: Regulation and governance approaches for AI

AI poses new cybersecurity risks to Internet infrastructure
Explanation: Yik Chan Chin points out that AI technologies introduce new cybersecurity challenges to Internet infrastructure. She argues that AI can make the Internet more vulnerable to cyber attacks.
Major discussion point: Impacts and risks of AI systems
Wanda Muñoz
Speech speed: 173 words per minute | Speech length: 1106 words | Speech time: 383 seconds

A human rights-based approach is needed for AI governance, beyond just ethics and principles
Explanation: Wanda Muñoz argues for a human rights-based approach to AI governance, going beyond ethics and principles. She emphasizes that human rights provide a framework for concrete actions, policies, budgets, indicators, and accountability mechanisms.
Evidence: Muñoz gives examples of how a human rights perspective can reframe AI governance discussions, such as focusing on systematic human rights violations resulting from AI harms and the need for accountability, remedy, and reparation.
Major discussion point: Regulation and governance approaches for AI

AI systems can perpetuate and amplify existing societal biases and discrimination
Explanation: Wanda Muñoz points out that AI systems can reinforce and exacerbate existing societal biases and discrimination. She argues that unless specific actions are taken, AI will continue to perpetuate these issues.
Evidence: Muñoz mentions documented examples of AI harms disproportionately affecting women, racialized persons, indigenous groups, and migrants in areas such as employment, health insurance, and social services.
Major discussion point: Impacts and risks of AI systems
Sandra Mahannan
Speech speed: 124 words per minute | Speech length: 517 words | Speech time: 249 seconds

Regulation should focus more on AI developers and models rather than users
Explanation: Sandra Mahannan suggests that AI regulation should primarily target developers and models rather than users. She argues that this approach would be more effective in addressing issues such as data quality, privacy, security, and interoperability.
Major discussion point: Regulation and governance approaches for AI
Sandrine Elmi Hersi
Speech speed: 111 words per minute | Speech length: 843 words | Speech time: 451 seconds

AI could restrict user agency and transparency in accessing online information
Explanation: Sandrine Elmi Hersi expresses concern that AI systems, particularly generative AI, could limit users’ ability to access and control the content they see online. She suggests that AI is becoming an unavoidable intermediary layer between users and Internet content.
Evidence: Hersi cites a Gartner study predicting a 25% decline in search engine traffic by 2026 due to the rise of AI chatbots.
Major discussion point: Impacts and risks of AI systems
Agreed with: the unknown speaker, on the point that AI is creating a new layer between users and Internet content
Alejandro Pisanty
Speech speed: 154 words per minute | Speech length: 1191 words | Speech time: 461 seconds

Generative AI could lead to loss of information detail and accuracy
Explanation: Alejandro Pisanty expresses concern that generative AI, particularly large language models, might result in the loss of specific details and accuracy in information. He suggests that the compression of large amounts of text into statistical models could lead to the omission of important details.
Major discussion point: Impacts and risks of AI systems
Agreements

Agreement Points

AI and Internet are distinct technologies requiring different governance approaches
Speakers: Vint Cerf, Luca Belli, Renata Mielli
Arguments:
– AI and Internet are fundamentally different technologies requiring distinct governance approaches
– Some core Internet values like transparency and accountability can apply to AI governance
– AI and Internet governance face similar challenges around transparency, accountability and decentralization
Summary: While AI and the Internet are distinct technologies, there are some shared governance challenges and principles that can be applied to both, particularly around transparency and accountability.

AI is creating a new layer between users and Internet content
Speakers: Unknown speaker, Sandrine Elmi Hersi
Arguments:
– AI is building an intermediary layer on top of Internet infrastructure
– AI could restrict user agency and transparency in accessing online information
Summary: AI is becoming an intermediary layer between users and Internet content, which could significantly change how users access and interact with online information.

Similar Viewpoints

The concept of openness in AI is not straightforward and does not automatically result in transparency or interoperability, unlike in Internet protocols.
Speakers: Anita Gurumurthy, Vint Cerf
Arguments:
– Openness in AI is complex and doesn’t necessarily lead to transparency or democratization
– AI systems are mostly proprietary and not interoperable, unlike Internet protocols

AI regulation should focus on specific applications, risks, and developers rather than attempting to regulate the technology as a whole or targeting users.
Speakers: Vint Cerf, Sandra Mahannan
Arguments:
– AI governance should focus on regulating applications and risks, not the technology itself
– Regulation should focus more on AI developers and models rather than users

Unexpected Consensus

Need for global coordination in AI governance
Speakers: Yik Chan Chin, Wanda Muñoz
Arguments:
– Global coordination is needed on AI risk categorization, liability frameworks, and training data standards
– A human rights-based approach is needed for AI governance, beyond just ethics and principles
Explanation: Despite coming from different perspectives (technical and human rights), both speakers emphasize the need for global coordination in AI governance, suggesting a broader consensus on the international nature of AI challenges.

Overall Assessment
Summary: The main areas of agreement include recognizing AI and the Internet as distinct technologies with some shared governance challenges, acknowledging AI’s role as a new intermediary layer in accessing online content, and the need for focused regulation on AI applications and developers.
Consensus level: There is a moderate level of consensus among the speakers on the fundamental challenges and approaches to AI governance. However, there are varying perspectives on the specific methods and focus areas for regulation. This suggests that while there is a shared understanding of the importance of AI governance, there is still a need for further discussion and refinement of specific governance strategies.
Differences

Different Viewpoints

Applicability of Internet governance principles to AI
Speakers: Vint Cerf, Luca Belli, Renata Mielli
Arguments:
– AI and Internet are fundamentally different technologies requiring distinct governance approaches
– Some core Internet values like transparency and accountability can apply to AI governance
– AI and Internet governance face similar challenges around transparency, accountability and decentralization
Summary: While Vint Cerf emphasizes the fundamental differences between AI and the Internet, suggesting distinct governance approaches, Luca Belli and Renata Mielli argue that some core Internet governance principles can be applied to AI governance.

Openness in AI systems
Speakers: Anita Gurumurthy, Vint Cerf
Arguments:
– Openness in AI is complex and doesn’t necessarily lead to transparency or democratization
– AI systems are mostly proprietary and not interoperable, unlike Internet protocols
Summary: Anita Gurumurthy argues that openness in AI is complex and doesn’t automatically lead to transparency, while Vint Cerf focuses on the proprietary nature of AI systems, highlighting their lack of interoperability compared to Internet protocols.

Unexpected Differences

Perception of AI risks
Speakers: Vint Cerf, Wanda Muñoz
Arguments:
– AI governance should focus on regulating applications and risks, not the technology itself
– AI systems can perpetuate and amplify existing societal biases and discrimination
Explanation: While Vint Cerf, as a technology pioneer, takes a more neutral stance on AI risks, focusing on application-specific regulation, Wanda Muñoz emphasizes the systemic risks of AI in perpetuating societal biases. This difference highlights the gap between technical and human rights perspectives on AI governance.

Overall Assessment
Summary: The main areas of disagreement revolve around the applicability of Internet governance principles to AI, the nature of openness in AI systems, and the appropriate focus and approach for AI regulation.
Difference level: The level of disagreement among speakers is moderate to high. While there is some consensus on the need for AI governance, there are significant differences in perspectives on how to approach it. These differences reflect the complex and multifaceted nature of AI governance, involving technical, legal, and human rights considerations. The implications of these disagreements suggest that developing a unified approach to AI governance will be challenging and may require balancing multiple perspectives and priorities.
Partial Agreements

All three speakers agree on the need for AI regulation, but they differ in their approaches: Vint Cerf suggests focusing on applications and risks, Sandra Mahannan emphasizes regulating developers and models, while Wanda Muñoz advocates for a human rights-based approach.
Speakers: Vint Cerf, Sandra Mahannan, Wanda Muñoz
Arguments:
– AI governance should focus on regulating applications and risks, not the technology itself
– Regulation should focus more on AI developers and models rather than users
– A human rights-based approach is needed for AI governance, beyond just ethics and principles
Takeaways

Key Takeaways
– AI and the Internet are fundamentally different technologies requiring distinct governance approaches, though some core Internet values like transparency and accountability can apply to AI governance.
– Openness in AI is complex and doesn’t necessarily lead to transparency or democratization. AI systems are mostly proprietary and not interoperable, unlike Internet protocols.
– AI governance should focus on regulating applications and risks, with emphasis on human rights, developer accountability, and global coordination on key issues like risk categorization and liability.
– AI poses new risks around restricted user agency, perpetuation of biases, cybersecurity vulnerabilities, and potential loss of information detail and accuracy.

Resolutions and Action Items
– Develop a joint report for the next IGF on elements that can enable an open AI environment
– Continue collaboration between the Dynamic Coalition on Core Internet Values and the Dynamic Coalition on Network Neutrality on AI governance issues

Unresolved Issues
– How to balance innovation and risk mitigation in AI regulation
– The extent to which Internet governance principles can or should be applied to AI governance
– How to ensure AI systems enhance rather than restrict access to diverse online information
– Approaches for global coordination on AI governance given differing national and regional priorities

Suggested Compromises
– Focus AI regulation on applications and risks rather than the underlying technology
– Adopt a human rights-based approach to AI governance while still allowing for innovation
– Develop compatibility mechanisms to reconcile divergent regional AI regulations while respecting diversity
Thought Provoking Comments

“We now have centralized data value creation by a handful of transnational platform companies, and we could actually have had, as Benkler pointed out long ago, different forms of wealth creation.” (Anita Gurumurthy)
Reason: This comment challenges the current paradigm of data and wealth concentration in the tech industry, suggesting there were alternative paths for more distributed value creation.
Impact: It shifted the discussion to consider the economic implications and power dynamics of AI and internet governance, rather than just technical aspects.

“AI and Internet are not the same thing. And I think that the standardization which has made the Internet so useful may not be applicable to artificial intelligence, at least not yet.” (Vint Cerf)
Reason: This comment importantly distinguishes AI from the internet and questions whether internet governance principles can be directly applied to AI.
Impact: It prompted participants to more carefully consider which internet governance principles may or may not be applicable to AI, rather than assuming direct transferability.

“From a human rights perspective, we would talk about the need for AI regulation to ensure accountability, remedy, and reparation when violations of human rights result from the use of AI.” (Wanda Muñoz)
Reason: This comment reframes the discussion of AI governance in terms of human rights, emphasizing accountability and remediation.
Impact: It broadened the conversation beyond technical and economic considerations to include a human rights perspective on AI governance.

“Generative AI applications are becoming a new intermediary layer between users and Internet content, increasingly unavoidable.” (Sandrine Elmi Hersi)
Reason: This insight highlights how AI is fundamentally changing how users interact with internet content.
Impact: It prompted discussion about the implications of AI as a new layer of internet infrastructure and how this might require new approaches to governance.

Overall Assessment
These key comments shaped the discussion by broadening its scope beyond technical internet governance principles to include economic, human rights, and structural considerations specific to AI. They challenged assumptions about the direct applicability of internet governance to AI and prompted a more nuanced exploration of how AI governance might need to differ. The discussion evolved from comparing internet and AI governance to considering AI’s unique challenges and impacts on internet use and society more broadly.
Follow-up Questions

How can we balance regional diversity and harmonization needs in AI governance? (Yik Chan Chin)
Explanation: This is important to respect different regional approaches while still establishing compatible mechanisms for global AI governance.

How can we strengthen multi-stakeholder involvement in AI governance? (Yik Chan Chin)
Explanation: This is crucial for ensuring diverse perspectives are included in shaping AI policies and regulations.

How can we regulate AI from the developer angle, focusing on data quality, privacy, security, and interoperability? (Sandra Mahannan)
Explanation: This approach could address issues at the source of AI development rather than just regulating end-user interactions.

How can we incorporate feminist and diverse perspectives into core values for AI governance? (Wanda Muñoz)
Explanation: This could lead to more inclusive and equitable AI systems by questioning social constructs, power dynamics, and resource distribution.

How can we ensure accountability, remedy, and reparation when human rights violations result from AI use? (Wanda Muñoz)
Explanation: This is critical for addressing the disproportionate harm AI can cause to marginalized groups.

How can we develop international norms specifically regarding liability and accountability in AI? (Alejandro Pisanty)
Explanation: This is important for establishing consistent global standards for responsible AI development and use.

How can we separate the effects of human agency and intention from the technology itself in AI governance? (Alejandro Pisanty)
Explanation: This distinction is crucial for appropriately addressing issues like misinformation and cybercrime in the context of AI.

How can we regulate AI in specific verticals or sectors rather than attempting to create one-size-fits-all regulations? (Alejandro Pisanty)
Explanation: This approach could lead to more effective and tailored regulations for different AI applications.

How can we ensure transparency in AI systems, particularly in complex models like large language models? (Luca Belli)
Explanation: Transparency is essential for accountability and understanding how AI systems make decisions.

How can we address the potential loss of specific details in AI-generated content? (Vint Cerf)
Explanation: This is important to maintain the accuracy and richness of information as AI systems become more prevalent in content creation and summarization.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event: Internet Governance Forum 2024, 15 Dec 2024 06:30h to 19 Dec 2024 13:30h, Riyadh, Saudi Arabia and online