WS #78 Intelligent machines and society: An open-ended conversation

Session at a Glance

Summary

This discussion focused on philosophical and ethical questions surrounding artificial intelligence (AI) and its impact on humanity. The speakers, Jovan Kurbalija and Sorina Teleanu from the Diplo Foundation, emphasized the need to move beyond surface-level discussions of AI ethics and biases to address more fundamental questions about human identity and agency in an AI-driven world. They raised concerns about the centralization of knowledge by large tech companies and advocated for bottom-up AI development to preserve diverse knowledge sources. The speakers questioned how AI might affect human communication and creativity, and whether humans should compete with machines for efficiency. They introduced the concept of a “right to be humanly imperfect” in contrast to AI’s pursuit of optimization. The discussion touched on the anthropomorphization of AI and the need to consider other forms of intelligence beyond human-like AI. Practical examples of AI tools for knowledge management and analysis were presented, demonstrating how AI can be used responsibly with proper attribution. Audience questions addressed topics such as AI ethics education, the potential personhood of advanced AI, and open-source approaches to AI development. The speakers concluded by proposing further philosophical discussions on AI’s impact across various cultural traditions, emphasizing the importance of examining what it means to be human in an AI era.

Key points

Major discussion points:

– The need to focus on immediate and practical impacts of AI rather than long-term hypotheticals

– Concerns about AI’s impact on human knowledge, communication, and identity

– The importance of maintaining human agency and imperfection in an AI-driven world

– Questions about the nature of AI intelligence compared to human intelligence

– The need for more philosophical and ethical considerations in AI governance discussions

Overall purpose:

The goal was to raise deeper philosophical questions about AI’s impact on humanity and encourage more critical thinking about AI beyond surface-level discussions of ethics and bias. The speakers aimed to challenge common AI narratives and highlight overlooked issues.

Tone:

The tone was thoughtful and somewhat skeptical of current AI hype and narratives. The speakers took a critical stance toward simplistic AI discussions while maintaining curiosity and openness to AI’s potential. There was an underlying sense of concern about AI’s societal impacts, balanced with calls for practical engagement rather than alarmism. The tone became more interactive and solution-oriented during the Q&A portion.

Speakers

– Jovan Kurbalija: Director of the Diplo Foundation and Head of Geneva Internet Platform

– Sorina Teleanu: Director of Knowledge at Diplo Foundation

– Mohammad Abdul Haque Anu: Secretary-General of Bangladesh Internet Governance Forum

– Tapani Tarvainen: Electronic Frontier Finland, Natta Foundation

– Henri-Jean Pollet: ISPA in Belgium

Additional speakers:

– Andrej Skrinjaric: Head of linguists at Diplo Foundation

Full session report

The Philosophical and Ethical Implications of AI: A Critical Examination

This discussion, featuring experts from the Diplo Foundation, delved into profound philosophical and ethical questions surrounding artificial intelligence (AI) and its impact on humanity. The speakers emphasised the need to move beyond surface-level discussions of AI ethics and biases to address more fundamental questions about human identity and agency in an AI-driven world.

Key Themes and Arguments:

1. Introduction and Framing

Jovan Kurbalija, Director of the Diplo Foundation, opened the discussion by stressing the importance of understanding basic AI concepts to engage in meaningful discussions beyond dominant narratives of bias and ethics. He argued for a more critical examination of AI’s impact on human knowledge and identity, proposing the concept of a “right to be humanly imperfect” in contrast to AI’s pursuit of optimisation.

Sorina Teleanu, Director of Knowledge at Diplo Foundation, presented a series of thought-provoking questions to frame the discussion. She questioned the anthropomorphisation of AI and the tendency to assign human attributes to machines. Teleanu raised concerns about how AI might affect human communication and creativity, encouraging consideration of other forms of intelligence beyond human-like AI.

2. Philosophical Considerations of AI

The discussion touched on various philosophical aspects of AI. Kurbalija introduced the concept of a “right to be humanly imperfect,” arguing for the preservation of human agency and imperfection in an AI-driven world. This idea resonated with other speakers, who expressed concern about the potential loss of human elements in pursuit of AI-driven efficiency.

Teleanu expanded on her concerns regarding the anthropomorphisation of AI, highlighting the potential risks of attributing human characteristics to machines. She also raised important questions about the interplay between AI and neurotechnology, emphasising the lack of privacy policies for brain data processing.

Tapani Tarvainen of Electronic Frontier Finland introduced a thought-provoking perspective on the potential personhood of advanced AI. His suggestion that, if artificial general intelligence (AGI) became indistinguishable from humans in capability, it might deserve human rights challenged conventional notions of humanity and consciousness.

3. AI Governance and Development

The speakers agreed on the need to focus on immediate and practical impacts of AI rather than long-term hypotheticals. Kurbalija criticised ideological narratives that postpone addressing current issues in education, jobs, and daily life. He advocated for bottom-up AI development to preserve diverse knowledge sources and prevent the centralisation of knowledge by large tech companies.

Kurbalija also stressed the importance of defining accountability in AI development and deployment, arguing that legal principles regarding AI responsibility are fundamentally simple and should be applied accordingly.

Henri-Jean Pollet emphasised the importance of open-source models and data licensing for AI development. He proposed systems to validate and test AI outputs, similar to human education processes, to ensure reliability and prevent “hallucinations” in AI-generated content.

4. Human-AI Interaction and Ethics

The discussion touched on various aspects of human-AI interaction. Teleanu raised questions about the impact of AI on human-to-human communication, wondering how increased reliance on AI-generated text might change how humans interact with each other in the future. This point highlighted the need to consider the long-term sociocultural implications of AI integration.

Mohammad Abdul Haque Anu, Secretary-General of Bangladesh Internet Governance Forum, expressed concerns about AI ethics education and implementation. He questioned who should be responsible for teaching AI ethics and how to ensure adherence to ethical guidelines in AI development and deployment, particularly in the context of developing countries.

Kurbalija shared an anecdote about AI-generated speeches at conferences, illustrating the potential for AI to influence human communication in professional settings.

5. Diplo Foundation’s Approach to AI Development

Towards the end of the discussion, Kurbalija elaborated on Diplo Foundation’s approach to AI development. He explained their focus on creating AI tools that preserve and enhance human knowledge, particularly in the field of diplomacy. These tools aim to assist diplomats and policymakers by providing quick access to relevant information and analysis, while maintaining human oversight and decision-making.

Conclusion and Practical Demonstration:

The discussion concluded with a practical demonstration of AI tools developed by the Diplo Foundation. Kurbalija showcased how these tools can be used to analyse complex diplomatic texts and generate summaries, emphasising the potential of AI to augment human capabilities in specialised fields.

The speakers emphasised the importance of continuing these philosophical discussions to examine what it means to be human in an AI era. Key unresolved issues included the effective implementation of AI ethics education, the long-term impacts of AI on human identity and interaction, and the ethical implications of AGI potentially becoming indistinguishable from humans.

This thought-provoking discussion challenged common AI narratives and highlighted overlooked issues, encouraging a more critical and philosophical approach to understanding AI’s role in shaping the future of humanity. The session ended with an invitation for continued dialogue and exploration of these complex issues.

Session Transcript

Jovan Kurbalija: No? No. Solution is very binary. Yes or no? Good. Well, welcome to our session, and sorry for a slight delay for our colleagues online and those in the room. You can hear me? You made some expression, no? Waiting in and out. Okay. But we have to, because of people outside, we have to speak. No? I know, I can tell you, I've been attending the IGF since the first IGF. The two biggest challenges are food, sometimes, and the second is sound. Therefore, it seems the sound works now? Yes. It's okay. Good. Thank you for coming today. I'm Jovan Kurbalija, Director of the Diplo Foundation and Head of the Geneva Internet Platform. My colleague Sorina Teleanu is Director of Knowledge at Diplo. Therefore, Sorina is working a lot on the intersection between artificial and human intelligence. Human, artificial, artificial, human. One minute, one minute and a half, two? Yeah, the screen is gone. Okay. I think you go ahead a little bit. I have permission — I think, the audience's permission. Where are you coming from? I'm from Bangladesh. Bangladesh. Okay. You guys in the Indian subcontinent, you invented the number zero and put us in trouble. Otherwise, we were using Roman numbers and it was so easy. But you invented one and zero and the digital world, and all the trouble starts, you know, including the sound in this room. I'm always teasing my colleagues and friends from India and Bangladesh, because the number zero, shunya, was invented on the subcontinent, and it came through Al-Khwarizmi, who invented the algorithm, first to Arab traders, and then it came to North Africa and Mr. Fibonacci. Music is coming. It's better than me. Me. Speaking of digital challenges. Those are challenges. Oh, there was a proposal that I start singing, which is a terrible idea, I can tell you. Okay. Some people are giving up. I'm sorry. I'm sorry. Okay. I've been working on AI myself, personally, since 1992, when I did my master's thesis on international law and expert systems. At that time, AI was about expert systems, rule-based AI. You basically try to cover, in this case international law, in rules: if there is a breach of this, then this, and the other thing. Now it's been five years since neural networks started, and especially after the arrival of ChatGPT, we have been focusing on AI, but in a slightly different way than other systems. We have been developing AI, and in addition we have been trying to see what the philosophical, governance, political aspects of AI are. The principle is that you don't need to be a programmer — although we have quite a few programmers — but you have to understand basic concepts in order to discuss AI, otherwise the discussion ends up with the unfortunately dominant narratives that we have here at the IGF, and not only the IGF, many meetings: bias, ethics, and we can generate the typical AI speech. A mix of bias, a mix of ethics. And what we notice is that there is a narration which is not going into the critical issues. What we basically did in this context was the following. I'm sorry. It should work. We were discussing: are we holding the globe together, or are we fighting, machines and people? And in most of our discussions, especially when the hype came in 2023 with ChatGPT, we were really concerned, because, being involved in AI, we were concerned that there was… an almost surrealistic discussion. And those of you who are in this field, you know that there was the long-termism movement of effective altruism, basically sending the message — you are, no? Like this? Upside, upside.
Upside? Yes. Okay. So many details. That was basically the long-termism movement, which was saying: don't discuss AI. I'm simplifying. Don't discuss AI currently. Let's see what will happen in 20, 30 years, and let's prevent AI from destroying humanity. I'm caricaturing a bit, but it was the narrative. You can recall the letter signed by leading scientists — stop AI, and these things. One worrying signal, which I noticed, was that it became a bit of an ideological narrative. And when there is ideology, then there is something fishy. I was born in a socialist country, and I can detect an ideological narrative in no time. And it was an ideological narrative, more or less similar to the communist narrative, which says: don't ask any questions now. Just enjoy, just follow the orders of the party, and in 30 or 40 years, we will be in the ideal society. If you complain, you go to the gulag and you get in trouble. Now, that was a bit of the initial narrative, and we said, okay, it's a bit tricky. We have to have a more serious discussion about it. Therefore, we started pushing for the immediate risks of AI in education, in jobs, in day-to-day life, in the use of AI by content moderation platforms, by Google, in any walk of life. And we were particularly concerned about one aspect, which you can consider a mid-term aspect, which was the massive codification of human knowledge by a few platforms — mainly platforms in the United States and China. Therefore, when you interact with ChatGPT, you basically, as we know, you're training the model. And whether it is ChatGPT, or Google, or Alibaba — it doesn't matter where it is coming from — the idea that knowledge is centralized and captured was very problematic. Therefore, we started developing bottom-up AI, trying to show that with relatively little resources, you can develop your own AI, and you can preserve your knowledge. And we say that it is technically feasible, financially affordable, and ethically desirable. Because knowledge which is codified by AI defines our identity, our dignity. And it's extremely important that we know what is happening with our knowledge, or the knowledge of the generations before us. This is more or less the context in which we have been doing that, but I'll ask Sorina now to build more on these discussions.

Sorina Teleanu: Let’s see if I can do this right. Good afternoon, everyone. Thank you for joining our session. I think what we wanted to do today is to have a bit of a dialogue around some of the more philosophical issues surrounding AI, because at least I personally feel we talk a lot about, you know, challenges and opportunities of AI, and how we need to govern it — how do we deal with the challenges, and how do we leverage opportunities? But what about the more human side of all of this? I do have a few questions that I’m hoping we can go through quickly, and I’ll actually do it myself, if we can just maybe switch. I do like slides quite a bit, and at Diplo we have quite a few very nice illustrations that I couldn’t miss an opportunity to share with you. So I’m going to do a bit of that; bear with me. And then I’m hoping to have more questions from you as well. What you will see on the slides is mostly questions — things that I hope we would feature more in our discussions on AI governance, on the impact of AI on the economy, society, humanity and whatnot, and which I feel we miss more often than not. So starting with this: we talk a lot about large language models, right? And generative AI and ChatGPT and all of these things. But what are some of the challenges in knowledge creation? How do we look at large language models and at generative AI tools? Are they a tool? Are they a shortcut? Are they our assistant, a new coworker? How do we relate ourselves to these tools? Are we even making conscious choices when we interact with them? Are we exercising our agency as humans, and to what extent? And, yeah, the broader question: what roles do we imagine large language models playing vis-a-vis us humans as we interact with them? Are we missing the forest for the trees? We talk a lot about generative AI, but are there other forms of intelligent machines — agents, whatever you want to call them — that we might need to focus on a bit more in our discussions on governance, policy, implications, and what it all means? And if we do need to do that, how do we actually get there? And then something that really, really bothers me is this way in which we assign human attributes to AI. We talk a lot about AI understanding, AI reasoning, using these sorts of words that are much more adequate for human intelligence, right? But does AI actually reason? Does AI actually understand? And when we use these words, do we actually understand what we mean by them? And yeah, there is a bit of hype around anthropomorphizing AI — I spoke too much today — and I’ll just give you one example. I’m not sure how many of you here might have attended the big event in Geneva, which happens on a yearly basis: the AI for Good Global Summit. Yes, I see at least one person who’s around Geneva, but he’s busy typing. Thank you for that. So last year, there was a lot of focus on humanoid robots. You would walk to the conference venue and see a human-like-looking robot here, another one there, another one there. But what I didn’t see was people actually questioning: okay, what does that robot actually mean? Does it understand? Does it reason? Does it think? Or is it just another way for us to hype technology in some way? You have them here as well. Yeah, we have robots here as well. And then a bit more on the interplay between AI and other technologies that also tend to join the hype a bit. Another example from exactly the same conference.
Last year, the focus was on neurotechnologies and how the interplay between AI and neurotechnology might be impacting the way we relate to these technologies in the future. There were many companies that came to the summit presenting their neurotechnology devices, applications, and whatnot, and we had this random curiosity: okay, let’s see what kind of privacy policies these companies have when it comes to their processing of brain data. You know, if you use a neurodevice, there is some processing of brain data. What do you think we found out? Any guess?

Jovan Kurbalija: Out of how many, eight or nine?

Sorina Teleanu: I think eight or nine. Those are the companies that we looked at. One had a line in the privacy policy saying, well, we might be processing your brain data, but because you agreed to use the service, you also agreed to us processing the data. And all the others — their privacy policies were mainly related to cookies and how you interact with the website, but nothing about the technology itself. And then the question is: if you, as an international organization, invite these companies to speak about how amazing neurotechnology can actually be at the end of the day and, I don’t know, help solve whatever problems, shouldn’t you also be a bit more careful about how they deal with the human implications of this, with human rights and whatnot? I think sometimes we talk, but we don’t walk the talk, in the policy space, and I’m hoping we will see a bit more of that going forward. More questions. When words lose their meaning: how many of you here use tools like ChatGPT? At least… okay, I’m seeing a lot of hands. I’m a bit worried, to be honest. Jovan and I are also teaching at the College of Europe, and you know how it is. When you have an assignment to write an essay, you just go to ChatGPT, you put it there, and you get your essay. We’re also seeing this in the policy space quite a lot. There was a funny anecdote from the head of an international organization in Geneva. Would you like to tell that shortly — about going to a conference and hearing the same?

Jovan Kurbalija: Yes, they went to a conference, and they were hearing all the same speeches in all the opening statements, and that was it. And we had one organization that created their strategy on 120 pages — it’s an important organization — and we said, let us read it. And then somebody didn’t even dare to remove the ChatGPT references, and we said, oh my god, what’s going on. That was the funny anecdote, with eight or nine speeches, basically. We then analyzed them, and we found the patterns that were basically generated by AI. They don’t even make an effort to go to Gemini or other platforms; everything was generated by ChatGPT. Okay, just for colleagues online: there was a comment, which is a good comment, and we often discuss that. We always ask how AI should be perfect, but at the same time we are who we are, and speeches, as you know, even written by humans, are not that exciting — so it was a good point. But what worried us was this huge document on AI strategy, and we were thinking: many countries will read it as the strategy for AI for that organization. First, can they read a hundred and ten or twenty pages? Second, is it really an expression of the policy interests of that organization? It’s not. It’s on a very common-sense level, and that’s it. Sorina.

Sorina Teleanu: Thank you, Jovan. And speaking more of concerns, I think the first question right there is the one best expressing what we have been discussing for quite a while, at least at Diplo, but I don’t see it so much in the broader space. If we rely so much on AI tools to write our texts and communicate in our emails or whatever else — right now it’s kind of easy to spot what is AI-generated text and what at least has some sort of human intervention. But if we end up relying so much on ChatGPT and like tools, will we still sound like humans five, ten, fifteen, twenty years from now? And also, what happens with all this kind of, how do I call it, self-perpetuation of AI? If AI comes up with new text based on data already available now, but five years from now all of the new data will still be AI-generated, what does it mean for broader issues of human knowledge, and also for how we as humans actually relate to each other at the end of the day and how we communicate with each other? This is one of my favorite books. I’m not sure if someone in the room has actually read it. I think we also have this kind of obsession as humans to try to develop AI which is really like us. We want artificial general intelligence to be as good as humans at every single task, because, I don’t know, we want that to happen. But what about other forms of intelligence out there? Can we develop intelligent machines that act more like octopuses, which we have discovered recently are quite smart and intelligent? More like fungi, more like forests? What about other forms of intelligence around us that we might want to borrow a bit from as we develop whatever we mean by intelligent machines? We tend to be so focused on us humans: we are at the center of the earth, we have the best, we know the best. But maybe it’s not exactly that, also as we look into developing technology. Yes, and we’re having more trouble with technology, because why not. And I’ll end with a few more questions. This is probably the overarching one: what does it mean to still be human in an increasingly AI era? This is more on the interplay I was mentioning earlier between neurotechnology, more invasive technologies, and AI, and Jovan can cover that a bit later as well. Another example of this interplay, which we have been trying to advocate for in the policy space in Geneva, is this right to be humanly imperfect. Jovan, would you like to reflect on this a bit?

Jovan Kurbalija: Well, the idea is counterfactual, and when I go to the Human Rights Council and the human rights community, they look at me as if I am from another planet. But I have been arguing for quite some time, three to five years — I even proposed a workshop at the IGF, but probably the MAG dismissed that — I have been arguing for the idea that we have a right to be imperfect. But our efficiency civilization, centered on optimizing efficiency, is basically making it unthinkable that you have a right to be lazy. You have a right to be lazy, you have a right to make mistakes, you have a right to ask, you have a right to this. But if you really consider carefully, humanity made its breakthroughs when we were lazy. In ancient Greece, these people had plenty of time to walk through the parks and think about philosophy. Or in Britain — all this tennis, football was invented in the British time, obviously. Some other people were working for the elite, that’s another story. But they were lazy, and they were inventing things. Therefore, my argument is that we have to fight for the right to be imperfect: the right to remain natural, not to be hacked biologically, the right to disconnect, the right to be anonymous, and the right to be employed over machines. I have my bet that in five years’ time — I will already be retired, but some of you are younger — we will have at least one workshop at the IGF asking: do we have a right to be imperfect? And I can offer it as a bet that we do it. But it’s a very serious question, going beyond, let’s say, a catchy title. It goes into what Sorina said, the critical question: what does it mean to be human today? And what will our humanity be in relation to machines in the coming years?

Sorina Teleanu: Thank you. And it’s not so much about robots coming and taking over and this Terminator business that used to be in focus for quite a while. It’s more about this very human-to-human interaction and how AI comes into play in this human-to-human interaction. We’ll end with a few more questions, and then we’re hoping you will be adding more questions at the bottom of our slide. So, trying to wrap up: what do we actually want from AI? There is — yeah, maybe I shouldn’t name the company, at least. It’s a prominent company. It’s developing, or trying to develop, artificial general intelligence — again, the type of AI that would be at least as good as humans at doing everything and anything. So the quote — I’m probably going to paraphrase it — goes a bit like this: our aim as a company is to develop artificial general intelligence, to figure out how to make it safe, and then to figure out its benefits. So when I saw that statement, I was thinking, okay, but shouldn’t it be a bit the other way around? Like sort out what the benefits of AGI are, then see how to make it safe, and then actually develop it? Isn’t it a bit of a wrong narrative there? And I think in our policy discussions at the IGF and elsewhere, we should be questioning these companies a bit more. I feel they’re just going around the place saying, hey, AI is going to solve all of our problems and we’re going to sleep happily every night because AI will do the work for us. Okay, but have we actually thought carefully about this? And again, it’s not about Terminator and these kinds of robots killing humanity, but what does it mean to still be human? When you said sleep — we are sleepwalking. We kind of are, exactly. Because again, we don’t see many of these questions, unfortunately, in the policy debates. And we’re coming from Geneva, where every other day, at least, you do have a discussion on AI governance. You can confirm. Thank you for nodding. How many of these questions do you actually see in those debates? Yeah. So I’m hoping we’ll be seeing more of them. Again: how do we interact with AI? To what extent are we even aware of these interactions? And how much of these interactions involve informed choices? How about our human agency? As I was saying, is AI having an impact on how we interact and how we relate with each other as human beings? Is AI making choices for us? Should it be making choices for us? Again, the notion of human agency. I wanted to go to Jovan’s point about the right to be humanly imperfect and this focus on efficiency. In a world driven so much by economic growth and GDP and this way of measuring progress, can humans compete with machines? Should humans compete with machines? Should we just do what we’re good at and let machines do repetitive tasks? And finally, is there a line to be drawn? Can we still draw this line at this point? Is it too late? Can we be asking more questions? Over to you, I would say. And I’m hoping you will have some reflections on some of these questions and, ideally, more questions, because I think questions are important and we should be asking more of them.

Jovan Kurbalija: Internally, I’m enthusiastic on the AI side because I have the geeky approach, and Sorina is not enthusiastic. And for those of you who have consulted Sorina’s book — I will probably repeat myself — she wrote a book on the Global Digital Compact. When it was adopted on the 27th of September at the UN, Sorina had been following the Compact for the last 18 months. Her slides are shared by many governments. And I said, Sorina, let’s use AI and convert your slides into a book. And she said, okay — you know how kind Sorina is. She said, okay, maybe. And the next day I see Sorina typing. I said, Sorina, come on, let’s put in the slides and we convert them into a book, and you have a book. Sorina wrote the book herself, in 47 days, on 200 pages. Here is the book. And I lost my battle that AI can help us in writing the… This is a book which was written, with very solid analysis, in 47 days. Therefore, we have an internal battle, which is a healthy debate, with me being more optimistic and Sorina being more careful and pessimistic. But we do reporting from the IGF through the website, with the use of AI. And we are going to ask at the end of the IGF, in a whole analysis of the IGF: how many, in our view, realistic questions about AI were asked over the five days? And how many were discussions on ethics, biases, and these things? Mind you, that’s an important discussion. But we are very often not seeing the forest for the trees — the critical issues about the future of knowledge. Therefore, we start with the questions or comments. Introduce yourself, please.

Mohammad Abdul Haque Anu: My name is Anu. I’m the Secretary-General of the Bangladesh Internet Governance Forum. My question is: which institution teaches AI ethics? Everybody knows, everybody says, that we should follow AI ethics. But who delivers the teaching process, the curriculum that we should follow? That is my question.

Jovan Kurbalija: Okay. I’m first a bit cautious about AI ethics. I think there is a human ethics, and I’m cautious about biases. For example, I’m full of biases. You are full of biases. We are biased, historically, culturally. Therefore, there is a very important discussion about how to organize that and how to make a common-sense curriculum, not too hyped. And I’m afraid that a lot of energy in AI debates is going to, let’s say, ethics. You now have more than 1,200 guidelines and standards on AI and ethics; that is losing any context now. I would be careful on that. I wouldn’t focus on AI and ethics; I would focus on how AI functions and what the implications are for society — the direct implications. We run an AI apprenticeship: people create their chatbot, they interact, we tell them this makes sense, this doesn’t make sense. Is it going to offend somebody? What are the biases that can be tolerated? What are the biases which are illegal or can harm people? Therefore, you take the general discussion on ethics — I love philosophy, but philosophy should be practical — and you put it on a very practical level. This is missing, and that should, in our view, be developed through apprenticeship programs. Yep.

Mohammad Abdul Haque Anu: Absolutely, I agree with you. Already we are suffering from misinformation, disinformation, and malinformation thrown around by social media and so many other channels. There is no ethics; nobody follows the ethics. Now what we are facing in the coming days is AI ethics. How will we suffer, and how will we manage this kind of thing? Nobody maintains the rules, nobody maintains the ethics, but everybody says that we should follow the ethics.

Jovan Kurbalija: Let me say it’s very simple. Diplo has AI systems. I’m the director of Diplo. If you go to our website and you ask a question and you feel insulted — you won’t, but if you feel insulted — I’m responsible. I mean, law has been very simple since Hammurabi, who invented the first legal rules. It’s very simple. You start a business or a non-profit, you start something, nobody forced you to start it, you use it, and you’re responsible if damage is created. I’m sorry to say, but you have some laws on AI of close to 200 pages, and the principles are simple. I’m a lawyer by training. Legal principles are very simple. Diplo does the chatbot. Okay. Has anyone forced you to do it? No. Is your chatbot harming somebody? So far no, but if it harms, I’m responsible, and that’s the end of the story. There are legal rules that apply to it. And frankly speaking, I’ve been in this field, in AI, for many years, but I’m always amazed when I go to these AI events and they go into algorithms, models, that the machine will take over — these issues are, at their core, rather simple, and the law is a codification of ethics. Don’t kill somebody. Don’t insult somebody. Don’t steal somebody’s property. It’s that simple. We are trying to simplify the discussion, but not oversimplify, because we found that a lot of energy, including, I’m sorry, in this space, is focused on issues which are nice to talk about, but which are sometimes not even good philosophy. If we discuss good philosophy, it’s great. But sometimes it’s basically repeating notions on ethics, bias, and that’s a bit of my concern with it. Therefore: practical apprenticeship, responsibility, legal responsibility, focus on concrete issues, train people to understand it. That would be my advice. We have a question over there. Let me just bring — you were here last year as well; you know, you’re our followers.

Tapani Tarvainen: Okay, I’m Tapani Tarvainen from Electronic Frontier Finland, Natta Foundation. Now I want to jump straight to the deep end of the philosophical questions here. Assuming that AGI becomes actually possible, which I’m not sure of, and it’s as good as people in every way — why isn’t it human at that point? You could argue that those machines, if they are our peers in every way, are our offspring, basically, and they should have human rights at that point as well. So how do I know that you’re not a machine, and why should I care in the end? Because I don’t know what goes on inside your brain. We don’t understand how we think, for that matter. Now, I don’t think we are LLMs, these large language models, nor any other type of present artificial intelligence. There are other kinds — AlphaGo or Stockfish, as somebody might know here. They are not LLMs, but they are still artificial intelligence. But perhaps we can come up with true AGI. And I also have to make a historical point here: neural networks have been studied since the 1980s, by Teuvo Kohonen, as you presumably know. So it’s not a new thing; it’s only now that the data available has made it more practical. But the philosophical point — is there a line to be drawn? I’m reading from the screen: is there a line to be drawn, and if so, by whom? And at what point does it become actually acceptable to treat robots as if they were human? Or, I would suggest, as soon as they start to behave human-like enough that we can’t tell them apart — from talking to them, pinching them, whatever — then they will be fighting for their human rights.

Jovan Kurbalija: Fantastic question. Let me say how I see it, and then we’ll hear from Sorina. You can see it as: basically, we are powerful and we decided to run this world — not the chimpanzee, not fungi, not the octopus — and we say, okay, we’re in charge. It’s a human-centered world; all of you out there, natural or robotic, are going to follow our rules. I’m simplifying, probably oversimplifying. But to your points — and I will make a few links, if we can go into them: there is Ibn Sina, the Arab philosopher, who did the Flying Man story, where basically he discusses what you said: are we real? He argued about the flying man, who is floating, maybe in this room — it’s a philosophical exercise — removed from the body, from sensory experiences: is there a free will, is there a free consciousness of the flying man? It’s for me still one of the most fascinating lines on the virtuality of thinking. And to the point: you raise many issues, but that’s one. The other issue which you raised is: so what? It is sort of echoing into — so what, why should we be worried? My colleague Andrej, who is the head of our linguists — raise your hand, Andrej — recently raised one interesting aspect, which he can narrate later on. There was this type of discussion in Abu Dhabi, at one event, and he said: why are we afraid of AI? Because it is the first machine which is not doing exactly what we expect. When you press the light switch, you get the light. When you press the accelerator in your car, you accelerate. When you write text in the word processor, you have the text which you wrote. Always the reaction is basically expected, predictable. For the first time, we have a machine which is hallucinating and which looks like us. Therefore, we started being worried about AI because it is not anymore this precise machine, like everything else invented by technology before. Suddenly, it’s like us. It hallucinates. It has the right to be imperfect. Therefore, it was an interesting insight by Andrej, and he may come back to it later on. We have a question from you. Introduce yourself.

Henri-Jean Pollet: I’m Henri-Jean Pollet from ISPA in Belgium. I wanted to jump on what you said there, because then we should submit AI to exams and graduation, with smart questions like those used to select humans, to bring them to something because… Sorry, I don’t know if it works. Okay. Now, I’m saying that because it can generate… There are a lot of topics right now, but just jumping on that answer: if you don’t know what it will generate, then you must know how we can have a system to validate what the AI, what this engine, is doing. A human goes through a graduation process, and at the end of it, through their education, they graduate. The AI should also be testable in some way, to make sure that what it says is not hallucination. That’s what I wanted to say about that. And second, if I may, two other points. How would you see the AI… because the information is coming from so many sources. So is there a way that you can conceive an AI system where you would have an open — I wouldn’t say open interface — to the data that is populating these processing engines of AI, but in a common way, so that they can integrate? Because otherwise you have a kind of single model according to a topic map of some kind of people, but not integrated. It would be great to have a common effort so everybody could work towards an AI of value, because this kind of — I wouldn’t say standard, but the model — could be integrated afterwards. Today it’s like a big collection, like a big vacuum cleaner that is sucking up all the possible data for different purposes, and sometimes maybe commercial purposes more than intelligent purposes, but it is something that we should tackle. And the last comment I would make is: what about treating the model like open source? Because whose data is it — the one that produces it, or the one that collects it? The open-source approach to software could offer some interesting aspects, because you have different kinds of licenses. You have licenses that say: my data is public domain, do whatever you want with it. Some others would keep it: no, this is a proprietary domain; if I give it to your model, you need to give me something in exchange; or I give it to you free, but then you give it out free. That’s the difference between all these license models in open source. Could that be an interesting path — considering data like software, in a way? That was my question. Thank you.

Jovan Kurbalija: An excellent question. And while we are preparing the answer — it will be very concrete, an answer with practical tools, not just narration, but how it can be done — Sorina, you can reflect a bit on the evaluation, the validation of knowledge.

Sorina Teleanu: Unfortunately, I kind of missed your question, so I’m not sure I’m in a good position to try to answer. So we’ll probably have to wait for Jovan to show us the practical things.

Jovan Kurbalija: Okay. Well, I was fast in finding the answer to your questions, which are critical questions. Yes, knowledge can be developed bottom-up, by us, and here is a very concrete example. All that we use is open source, from the language models to all the applications that we use. And we are now transcribing, or reporting from, the IGF. Our session will be transcribed. And here is the key question. Let’s say yesterday there was a session; you can click on any session, and you have a session at a glance, you have a report, a full transcript. And, what is very critical, you also have what the speakers said — their knowledge and their input into the AI model. You have speech lengths, you have a knowledge graph for the overall session. You have in-depth analysis: did people agree? Today we had many agreements — not disagreements, but different views: differences, partial agreements and other elements. And you have it for the whole day: what was discussed during the whole day — here is a knowledge graph of the discussion. And it’s very busy. You can find your discussion and how it relates to all the other discussions. Now the key question is: okay, we created an AI chatbot, and we can ask a question. Let me ask the question: should there be a human right to be imperfect? Probably nobody referred to it, but let’s see, maybe. Yes, five minutes. We basically processed all the transcripts, and we get the answer. Common knowledge — but then, what is the key? The AI identifies on what basis the answer was given. Therefore, it makes a reference to what you said: if you spoke at the session and you said this, that should be attributed to you, not to some abstract chatbot. Let me ask some other question, which is more common: what should AI governance be? Everybody is talking about this nowadays. I’m sure there will be some answer. Therefore, just to explain: we transcribe the public sessions into transcripts, analyzed by an AI system, and we have here some answer. Now, the answer is based on sources, where you can see exactly who said it, at what time, what the pointers were. And we always — this is our principle — whenever we generate any answer, it must be attributed to somebody or something, for the sake of precision, for the sake of fairness. If it is a book written by somebody, an article — this is the major problem. And your question pointed in this direction: can we do it? Yes, it’s doable. And the big companies, OpenAI and Google and others, can do it. Why don’t they do it? That is another question, but they cannot give us the explanation that it is technically impossible. It is technically possible. And that is basically what is, I would say, critical for the future of serious discussion about AI. Or here — you have video summaries; you can go to our website, everything is public. And then here you have the answer, with the details, based on the logic and knowledge which was delivered yesterday and today here in this space. And here you find the answer, and then here are the sources: the specific session, and whom the AI decided to rely on — Wojci Aida, Abdulla Alshmarani, Ala Zahir, Rupeng. The AI decided to choose their paragraphs, and that’s critical. And when you click, you go to the website, to the page with the transcript from that session, and you can go through that session. Therefore, to your point: it is technically possible, financially affordable and ethically desirable.
And I have tried to answer with a concrete example of what we are experiencing today in the building. We got an indication that we have five minutes. What I suggest: this session started with the questions — there are many questions which Sorina listed — which we have to discuss. You can drop us an email; we are creating a small group of philosophers. We are proposing one idea to the Norwegian host of the IGF: to have a Sophie’s World for AI — for those of you who have read Sophie’s World — and to have a session with the author of Sophie’s World in Oslo and ask him how he would consider writing Sophie’s World today. But in the process we plan to engage in discussion on Arab philosophy, Asian philosophy, Confucius, Buddhism, Christian, Ubuntu, African philosophy. Those are traditions that have to feed into this serious discussion. Ethics is an important part, but I would say knowledge, education, what it means to be human, what it means to interact with each other — is it going to be refined or remain the same? Those are critical issues in addition to ethics, bias and other things. Now, without risking becoming persona non grata with our generous host, I would like to thank you for your patience and excellent questions. And we will leave the cards; if you’re interested in this possible Sophie’s World — I don’t know if the Norwegians are going to click on that — I would like to invite you, with or without Sophie’s World, to continue this discussion and see how far we can move and provide more in-depth answers to your excellent questions. Thank you very much. Thank you. Thanks. Thanks for having me. Nice to meet you, Eric.

Jovan Kurbalija

Speech speed

136 words per minute

Speech length

3755 words

Speech time

1653 seconds

AI’s impact on human knowledge and identity

Explanation

Kurbalija expresses concern about the centralization and capture of knowledge by a few platforms through AI. He emphasizes the importance of preserving knowledge and developing bottom-up AI to maintain control over our knowledge and identity.

Evidence

Diplo Foundation’s efforts to develop bottom-up AI and show that it is technically feasible, financially affordable, and ethically desirable.

Major Discussion Point

Philosophical and ethical implications of AI

Agreed with

Sorina Teleanu

Agreed on

Need for critical examination of AI’s impact on society

Focus on immediate risks of AI rather than long-term hypotheticals

Explanation

Kurbalija argues for focusing on the immediate risks of AI in education, jobs, and day-to-day life, rather than long-term hypothetical scenarios. He criticizes the ideological narrative surrounding AI that postpones addressing current issues.

Evidence

Comparison to communist narrative that promised an ideal society in the future while ignoring present concerns.

Major Discussion Point

AI governance and development

Right to be humanly imperfect in an AI-driven world

Explanation

Kurbalija proposes the idea of a right to be imperfect in an increasingly efficiency-driven world. He argues that human breakthroughs often come from periods of leisure and imperfection, which may be threatened by AI-driven optimization.

Evidence

Historical examples of inventions and philosophical developments during periods of leisure in ancient Greece and Britain.

Major Discussion Point

Human-AI interaction

Agreed with

Sorina Teleanu

Agreed on

Importance of preserving human agency and identity in AI development

Potential for bottom-up AI development to preserve human knowledge

Explanation

Kurbalija demonstrates the possibility of developing AI systems that preserve and attribute knowledge to its original sources. He argues that this approach is technically possible and ethically desirable for maintaining the integrity of human knowledge.

Evidence

Demonstration of Diplo Foundation’s AI system that transcribes IGF sessions, analyzes content, and provides attributed answers based on the discussions.

Major Discussion Point

Human-AI interaction

Sorina Teleanu

Speech speed

180 words per minute

Speech length

2129 words

Speech time

705 seconds

Need to question anthropomorphizing AI and assigning human attributes

Explanation

Teleanu expresses concern about the tendency to assign human attributes to AI, such as understanding and reasoning. She argues that this anthropomorphization may lead to misunderstandings about AI’s capabilities and nature.

Evidence

Example of humanoid robots at the AI for Good Global Summit and the lack of critical questioning about their actual capabilities.

Major Discussion Point

Philosophical and ethical implications of AI

Agreed with

Jovan Kurbalija

Agreed on

Need for critical examination of AI’s impact on society

Lack of privacy policies for brain data processing in neurotechnology

Explanation

Teleanu highlights the lack of adequate privacy policies for brain data processing in neurotechnology companies. She argues that this raises concerns about human rights and data protection in the context of AI and neurotechnology integration.

Evidence

Survey of privacy policies of neurotechnology companies presenting at the AI for Good Global Summit, finding only one with a relevant mention of brain data processing.

Major Discussion Point

AI governance and development

Impact of AI on human-to-human communication

Explanation

Teleanu raises concerns about the potential impact of AI on human communication and relationships. She questions whether reliance on AI-generated text will change how humans sound and interact with each other in the future.

Major Discussion Point

Human-AI interaction

Agreed with

Jovan Kurbalija

Agreed on

Importance of preserving human agency and identity in AI development

Mohammad Abdul Haque Anu

Speech speed

115 words per minute

Speech length

131 words

Speech time

68 seconds

Concerns about AI ethics education and implementation

Explanation

Anu questions who is responsible for teaching AI ethics and how it should be implemented. He expresses concern about the lack of adherence to ethical guidelines in current technologies and worries about similar issues arising with AI.

Evidence

Reference to existing problems with misinformation and disinformation in social media as an example of ethical challenges in technology.

Major Discussion Point

Human-AI interaction

Tapani Tarvainen

Speech speed

179 words per minute

Speech length

304 words

Speech time

101 seconds

Possibility of AGI becoming indistinguishable from humans

Explanation

Tarvainen raises philosophical questions about the nature of AGI if it becomes as capable as humans in every way. He suggests that at some point, highly advanced AI might be considered human and deserve human rights.

Major Discussion Point

Philosophical and ethical implications of AI

Henri-Jean Pollet

Speech speed

170 words per minute

Speech length

457 words

Speech time

161 seconds

Importance of validating AI outputs through testing

Explanation

Pollet suggests that AI systems should undergo testing and validation processes similar to human education and graduation. He argues that this would help ensure the reliability and accuracy of AI-generated outputs.

Major Discussion Point

AI governance and development

Importance of open source models and data licensing for AI development

Explanation

Pollet proposes the use of open source models and various data licensing approaches for AI development. He suggests that this could help address issues of data ownership and promote more collaborative and transparent AI development.

Evidence

Reference to different types of open source software licenses as a potential model for AI data licensing.

Major Discussion Point

AI governance and development

Agreements

Agreement Points

Need for critical examination of AI’s impact on society

Jovan Kurbalija

Sorina Teleanu

AI’s impact on human knowledge and identity

Need to question anthropomorphizing AI and assigning human attributes

Both speakers emphasize the importance of critically examining AI’s impact on human knowledge, identity, and societal interactions, rather than accepting prevalent narratives.

Importance of preserving human agency and identity in AI development

Jovan Kurbalija

Sorina Teleanu

Right to be humanly imperfect in an AI-driven world

Impact of AI on human-to-human communication

The speakers agree on the need to preserve human agency, imperfection, and authentic communication in the face of increasing AI integration in society.

Similar Viewpoints

Both argue for more transparent and collaborative approaches to AI development that preserve human knowledge and promote open access.

Jovan Kurbalija

Henri-Jean Pollet

Potential for bottom-up AI development to preserve human knowledge

Importance of open source models and data licensing for AI development

Unexpected Consensus

Concern about oversimplification of AI ethics discussions

Jovan Kurbalija

Mohammad Abdul Haque Anu

Focus on immediate risks of AI rather than long-term hypotheticals

Concerns about AI ethics education and implementation

Despite coming from different perspectives, both speakers express concern about the current state of AI ethics discussions, suggesting a need for more practical and immediate approaches.

Overall Assessment

Summary

The main areas of agreement revolve around the need for critical examination of AI’s societal impact, preservation of human agency and knowledge, and more practical approaches to AI ethics and development.

Consensus level

Moderate consensus on the importance of addressing immediate AI challenges and preserving human elements in AI development. This implies a shared recognition of the need for more nuanced and practical approaches to AI governance and ethics.

Differences

Different Viewpoints

Approach to AI ethics and governance

Jovan Kurbalija

Mohammad Abdul Haque Anu

Kurbalija expresses concern about the centralization and capture of knowledge by a few platforms through AI. He emphasizes the importance of preserving knowledge and developing bottom-up AI to maintain control over our knowledge and identity.

Anu questions who is responsible for teaching AI ethics and how it should be implemented. He expresses concern about the lack of adherence to ethical guidelines in current technologies and worries about similar issues arising with AI.

While both speakers are concerned about AI ethics, Kurbalija focuses on bottom-up AI development and knowledge preservation, while Anu emphasizes the need for clear ethical guidelines and implementation.

Unexpected Differences

Anthropomorphization of AI

Sorina Teleanu

Tapani Tarvainen

Teleanu expresses concern about the tendency to assign human attributes to AI, such as understanding and reasoning. She argues that this anthropomorphization may lead to misunderstandings about AI’s capabilities and nature.

Tarvainen raises philosophical questions about the nature of AGI if it becomes as capable as humans in every way. He suggests that at some point, highly advanced AI might be considered human and deserve human rights.

While Teleanu cautions against anthropomorphizing AI, Tarvainen unexpectedly considers the possibility of highly advanced AI being indistinguishable from humans and potentially deserving human rights. This difference highlights the complexity of defining the boundaries between human and artificial intelligence.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to AI ethics and governance, the focus on immediate vs. long-term AI impacts, and the philosophical implications of advanced AI.

Difference level

The level of disagreement among the speakers is moderate. While there are differing perspectives on specific aspects of AI development and its implications, there is a general consensus on the importance of addressing AI’s impact on society. These differences highlight the complexity of AI governance and the need for multifaceted approaches to address various concerns.

Partial Agreements

Both speakers agree on the importance of addressing immediate AI impacts, but Kurbalija focuses more on practical risks in various sectors, while Teleanu emphasizes the potential changes in human communication and relationships.

Jovan Kurbalija

Sorina Teleanu

Kurbalija argues for focusing on the immediate risks of AI in education, jobs, and day-to-day life, rather than long-term hypothetical scenarios. He criticizes the ideological narrative surrounding AI that postpones addressing current issues.

Teleanu raises concerns about the potential impact of AI on human communication and relationships. She questions whether reliance on AI-generated text will change how humans sound and interact with each other in the future.

Takeaways

Key Takeaways

There is a need for more philosophical and ethical discussions around AI beyond just bias and ethics

Current AI development and governance discussions often lack critical questioning of long-term implications

Bottom-up AI development is technically feasible and ethically desirable to preserve human knowledge

AI’s impact on human-to-human interaction and communication needs more consideration

There are concerns about anthropomorphizing AI and assigning human attributes to it inappropriately

Legal and ethical responsibility for AI systems needs to be clearly defined

Resolutions and Action Items

Proposal to create a ‘Sophie’s World for AI’ session at the next IGF in Norway to discuss AI from various philosophical traditions

Plan to continue the discussion on philosophical implications of AI among interested participants

Unresolved Issues

How to effectively implement AI ethics education and guidelines

The potential long-term impacts of AI on human identity and interaction

How to balance AI efficiency with preserving human imperfection and agency

The ethical implications of AGI potentially becoming indistinguishable from humans

How to ensure transparency and attribution in AI-generated content and knowledge

Suggested Compromises

Developing AI through apprenticeship programs to balance efficiency with human oversight

Using open source models and varied data licensing to allow for more inclusive AI development

Implementing systems to validate and test AI outputs similar to human education processes

Thought Provoking Comments

We have been developing AI, and in addition we have been trying to see what are the philosophical, governance, political aspects of AI. The principle is that you don’t need to be a programmer, although we have quite a few programmers, but you have to understand basic concepts in order to discuss AI, otherwise the discussion ends up with unfortunately dominant narratives that we have here at IGF, and not only IGF, many meetings, bias, ethics, and we can generate typical AI speech.

speaker

Jovan Kurbalija

reason

This comment challenges the typical AI discourse and emphasizes the need to understand fundamental concepts beyond just technical aspects.

impact

It set the tone for a more philosophical and governance-focused discussion, moving away from purely technical or ethical considerations.

What roles do we imagine large language models playing vis-a-vis us humans as we interact with them? Are we missing the forest for the trees? We talk a lot about generative AI, but are there other forms of intelligent machines, agents, whatever you want to call them, that we might need to focus a bit more in our discussions on governance, policy, gain implications, and what does it all mean?

speaker

Sorina Teleanu

reason

This series of questions broadens the scope of the AI discussion beyond just large language models and encourages thinking about diverse forms of AI.

impact

It prompted participants to consider a wider range of AI applications and their implications, moving the conversation beyond popular topics like ChatGPT.

Therefore, my argument is that we have to fight to the right to be imperfect, to refuse to remain natural, not to be hacked biologically, the right to disconnect, the right to be anonymous, and the right to be employed over machines.

speaker

Jovan Kurbalija

reason

This comment introduces a novel concept of human rights in the AI era, challenging the focus on efficiency and perfection.

impact

It sparked a new line of thinking about human values and rights in an AI-dominated world, shifting the discussion towards more philosophical considerations.

Assuming that AGI becomes actually possible, which I’m not sure of, and it’s as good as people in every way, so why isn’t it human at that point? You could argue that those machines, if they are our peers in every way, they are our offspring, basically, and they should have human rights at that point as well.

speaker

Tapani Tarvainen

reason

This comment raises profound questions about the nature of intelligence, consciousness, and rights in relation to AGI.

impact

It deepened the philosophical aspect of the discussion, prompting consideration of the ethical and legal implications of highly advanced AI.

How would you see the AI… high because the information is coming from so many sources. So is there a way that you can conceive an AI system where you would have like an open, I wouldn’t say open interface to the data that is populating these processing engines of AI, but in a common way so that they can integrate because otherwise you have a kind of single model according to a topic map of some kind of people but not integrated.

speaker

Henri-Jean Pollet

reason

This comment raises important questions about AI data sources, integration, and the potential for more open and collaborative AI development.

impact

It shifted the discussion towards practical considerations of AI development and data management, prompting thoughts on open-source approaches to AI.

Overall Assessment

These key comments shaped the discussion by moving it beyond surface-level considerations of AI ethics and bias, delving into deeper philosophical questions about the nature of intelligence, human rights in an AI era, and the practical challenges of AI development and governance. The conversation evolved from initial critiques of typical AI narratives to exploring novel concepts like the right to be imperfect and the potential personhood of advanced AI. This progression led to a rich, multifaceted dialogue that touched on technical, ethical, philosophical, and practical aspects of AI’s impact on society and human identity.

Follow-up Questions

How can we develop bottom-up AI to preserve knowledge and prevent centralization?

speaker

Jovan Kurbalija

explanation

This is important to address concerns about knowledge being centralized and captured by a few platforms, potentially impacting identity and dignity.

How will our communication and relationship with each other as humans change due to increased reliance on AI-generated text?

speaker

Sorina Teleanu

explanation

This explores the long-term implications of AI on human-to-human interactions and communication styles.

Can we develop intelligent machines that mimic other forms of intelligence found in nature, like octopuses or fungi?

speaker

Sorina Teleanu

explanation

This questions our human-centric approach to AI and suggests exploring alternative models of intelligence.

What does it mean to still be human in an increasingly AI-driven era?

speaker

Sorina Teleanu

explanation

This philosophical question is crucial for understanding the evolving relationship between humans and AI.

How can we implement a ‘right to be humanly imperfect’ in an efficiency-driven world?

speaker

Jovan Kurbalija

explanation

This explores the tension between human nature and the push for AI-driven efficiency.

Who should be responsible for teaching AI ethics?

speaker

Mohammad Abdul Haque Anu

explanation

This addresses the need for clear guidelines and responsibility in AI development and deployment.

At what point should we consider highly advanced AI as deserving of human rights?

speaker

Tapani Tarvainen

explanation

This philosophical question challenges our definitions of humanity and rights in relation to AI.

How can we develop a system to validate and test AI outputs to ensure they are not hallucinations?

speaker

Henri-Jean Pollet

explanation

This addresses the need for quality control and reliability in AI-generated content.

How can we create a common, integrated data model for AI that allows for collaboration while respecting data ownership?

speaker

Henri-Jean Pollet

explanation

This explores the potential for standardization and open collaboration in AI development.

Could open-source software licensing models be applied to AI training data to address issues of ownership and usage rights?

speaker

Henri-Jean Pollet

explanation

This proposes a potential solution to data ownership and usage concerns in AI development.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #1 Challenges of cyberdefense in developing economies

Open Forum #1 Challenges of cyberdefense in developing economies

Session at a Glance

Summary

This panel discussion focused on cybersecurity and cyber defense challenges facing developing economies. Experts from various fields shared insights on key issues and potential solutions.

The panelists emphasized that while cyber threats are similar for developed and developing nations, the latter often lack adequate preparation, skilled personnel, and effective policies to respond. They highlighted the importance of capacity building, noting the significant skills gap in cybersecurity professionals in developing countries. The need for critical thinking, effective communication, and promoting collaboration were identified as crucial skills for Chief Information Security Officers (CISOs) in these regions.

Several speakers stressed the importance of international cooperation and trust-building between nations to combat cyber threats effectively. They discussed the role of artificial intelligence in both offensive and defensive cybersecurity measures, as well as the increasing sophistication of attacks targeting critical infrastructure and supply chains.

The discussion also touched on the challenges of participating in numerous international cybersecurity forums, with limited resources available to developing nations. Panelists suggested focusing on demand-driven approaches to capacity building and leveraging existing frameworks and resources rather than reinventing the wheel.

Legal frameworks were addressed, with emphasis on the need for well-trained law enforcement personnel rather than simply creating new laws. The panelists concluded that effective implementation of existing tools and laws, coupled with sustained capacity building efforts, is crucial for improving cybersecurity in developing economies.

Keypoints

Major discussion points:

– The importance of preparation, people, and policy for effective cybersecurity in developing economies

– The need for capacity building and skills development to address gaps in cybersecurity capabilities

– The challenges of limited resources and expertise in developing countries for cybersecurity

– The role of international cooperation and information sharing in improving cybersecurity

– The importance of implementing existing frameworks rather than creating new laws/regulations

Overall purpose:

The goal of this discussion was to explore cybersecurity challenges and strategies for developing economies, with a focus on practical steps these countries can take to improve their cyber defenses despite limited resources.

Tone:

The tone was collaborative and solution-oriented throughout. Speakers built on each other’s points and emphasized the need for practical, implementable approaches rather than just theoretical frameworks. There was a sense of urgency about the importance of cybersecurity for developing nations, but also optimism about existing resources and frameworks that can be leveraged.

Speakers

– Olga Cavalli: Moderator

– José Cepeda: European parliamentarian from Spain

– Merike Kaeo: CISO, board member and technical advisor

– Ram Mohan: Chief Strategy Officer of Identity Digital, former ICANN board member

– Christopher Painter: Director of Global Forum on Cyber Expertise, first cyber diplomat in the world

– Wolfgang Kleinwächter: Professor emeritus of University of Aarhus, former commissioner of the Global Commission of Stability and Cyberspace

– Philipp Grabensee: Defense counsel and former chairman of Afilias

Full session report

Cybersecurity Challenges and Strategies for Developing Economies: A Comprehensive Panel Discussion

This panel discussion brought together experts from various fields to explore the cybersecurity and cyber defence challenges facing developing economies. The conversation was solution-oriented, emphasising practical approaches to improve cyber defences in countries with limited resources.

Key Challenges for Developing Economies

The panellists agreed that while cyber threats are similar for developed and developing nations, the latter often lack adequate preparation, skilled personnel, and effective policies to respond. Merike Kaeo highlighted the significant skills gap in cybersecurity professionals in developing countries, while Ram Mohan stressed that the first point of failure in cyber incidents is often the lack of preparation among systems and people.

The discussion revealed a consensus on the critical importance of capacity building and skills development. Christopher Painter emphasised the need for technical assistance, while Wolfgang Kleinwächter argued that developing countries should define their own cybersecurity needs rather than relying solely on exported models from developed nations.

Essential Skills and Strategies

Merike Kaeo identified critical thinking, effective communication, and the promotion of collaboration as crucial skills for Chief Information Security Officers (CISOs) in developing regions. She also emphasised the importance of CISOs being stakeholders in developing national cybersecurity laws and regulations. Ram Mohan emphasised the importance of preparation, people, and policy as key factors in cybersecurity readiness.

Several speakers, including José Cepeda and Christopher Painter, stressed the importance of international cooperation and trust-building between nations to combat cyber threats effectively. Merike Kaeo echoed this sentiment, highlighting the value of collaboration and information sharing between countries.

Importance of Preparation and Drills

Ram Mohan and other speakers emphasised the critical role of preparation and regular drills in enhancing cybersecurity readiness. They stressed that organisations and nations should conduct frequent exercises to test their response capabilities and identify areas for improvement.

Future Cyber Threats

José Cepeda provided a forecast for cyber threats in 2025, highlighting the increasing sophistication of attacks targeting critical infrastructure and supply chains. He also discussed the potential role of artificial intelligence in both offensive and defensive cybersecurity measures.

International Forums and Frameworks

The panel discussed the challenges developing nations face in participating in numerous international cybersecurity forums due to limited resources. Christopher Painter highlighted several important forums, including the UN Open-Ended Working Group, the Global Forum on Cyber Expertise, and the upcoming WSIS+20 event. Wolfgang Kleinwächter pointed to the African Digital Compact as a model for regional strategies.

Olga Cavalli raised the question of how developing countries can find the time and resources to prepare information for sharing with colleagues, highlighting the practical challenges of international cooperation. She also noted the language barriers in accessing cybersecurity information, a point echoed by Ram Mohan, who stressed the importance of accessibility of information in the right language and at the right level.

Legal and Policy Considerations

Philipp Grabensee cautioned against hastily creating new laws in response to cybercrime, emphasising instead the importance of enforcing existing laws and building capacity. He also discussed content-related crimes and the potential negative consequences of rapidly implemented legislation. This view aligned with Ram Mohan’s focus on preparation and policy implementation rather than constant policy changes.

José Cepeda discussed the development of common certification systems in the EU, while Christopher Painter stressed the need for political will to prioritise cybersecurity.

Practical Approaches and Resources

The panel suggested several practical steps for improving cybersecurity in developing economies:

1. Utilise existing resources like the Global Forum on Cyber Expertise (GFCE) framework and materials.

2. Implement established guidelines such as Australia’s Essential Eight principles and the Center for Internet Security’s 10 essential controls.

3. Focus on practical, small steps in building cyber defence rather than overwhelming large-scale changes.

4. Encourage developing nations to set up national CSIRTs (Computer Security Incident Response Teams).

Ram Mohan emphasised the importance of taking small, practical steps in building cyber defence for developing economies, rather than attempting comprehensive changes all at once.
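To make this "small steps" idea concrete, here is a minimal illustrative sketch of what a first-pass baseline audit might look like. It is an editorial illustration rather than anything presented in the session: the inventory fields and the three controls are hypothetical stand-ins loosely inspired by baseline lists such as the Essential Eight, not items quoted from any framework.

```python
# Illustrative sketch only: audit a tiny system inventory against a few
# baseline controls and report what to prioritise first. The inventory
# format and control names are hypothetical.

inventory = [
    {"name": "dns-primary", "mfa_enabled": True,  "days_since_patch": 12,  "backups_tested": True},
    {"name": "mail-gw",     "mfa_enabled": False, "days_since_patch": 190, "backups_tested": False},
]

# Each check returns True when the control passes for a given system.
checks = {
    "multi-factor authentication enabled": lambda s: s["mfa_enabled"],
    "patched within the last 30 days":     lambda s: s["days_since_patch"] <= 30,
    "backup restore tested":               lambda s: s["backups_tested"],
}

for system in inventory:
    failed = [name for name, check in checks.items() if not check(system)]
    if failed:
        print(f"{system['name']}: prioritise -> {', '.join(failed)}")
    else:
        print(f"{system['name']}: baseline controls in place")
```

The point of starting this small, as the panellists suggested, is that one or two controls verified everywhere beat an ambitious framework that is never drilled.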

Changing Nature of Cybersecurity Personnel

Wolfgang Kleinwächter highlighted the evolving role of military personnel in the context of cybersecurity, noting that future conflicts may require different skill sets and approaches compared to traditional warfare.

Conclusion

The discussion highlighted the complex challenges facing developing economies in cybersecurity, emphasising the need for capacity building, international cooperation, and strategic resource allocation. While there was broad consensus on the importance of these issues, the panel also recognised the need for tailored approaches that consider the specific contexts and needs of developing nations. Moving forward, the focus should be on implementing existing frameworks, building human capacity, and fostering sustainable, locally driven cybersecurity strategies that prioritise preparation, skill development, and practical, incremental improvements.

Session Transcript

Olga Cavalli: It’s Chris, I can hear you. Wolfgang and Ram, they have their own conversation, I could tell from here, so… Hello, hello. Okay, perfect. Okay, thank you for being… Hola. Thank you. Let’s start, because we have only one hour. Thank you. Thank you very much for being with us. Thank you, Philipp. Thank you, Chris. Thank you, Merike, for being with us remotely. And finally, we have another big audience, but here are the good ones. More people are coming. But as we only have one hour, and we have a lot to talk about, I would like to start. First, thank you to all of you. Thank you, Jose. Thank you, Wolfgang. Thank you, Ram, Philipp, Merike, Chris, and those of you who are here with us. We have this space to talk and exchange some ideas about cyber security and cyber defense in developing economies. We have some issues here. So, I would like first to start presenting our distinguished panelists. We have Mr. José Cepeda. He’s a European parliamentarian. He’s from Spain. We have Merike Kaeo. She’s joining remotely. She’s a CISO and board member and technical advisor. Hi, Merike. We have Ram Mohan here with us. He’s chief strategy officer of Identity Digital. And he was a former ICANN board member. We have Chris Painter, our dear friend Chris, remote from the United States. He is the director of the Global Forum on Cyber Expertise. And Chris was the first cyber diplomat in the world. So, he’s very well known for that. We have Professor Wolfgang Kleinwächter, also a very good friend of ours. Professor emeritus of the University of Aarhus and former commissioner of the Global Commission on the Stability of Cyberspace, GCSC. And we have our… Our dear friend, Philipp Grabensee from Germany, he is the defense counsel and former chairman of Afilias, which is a company devoted to DNS services and internet services. So thank you all for being with us. And I would like to start with a statement from José Cepeda from the European Parliament. Jose will make some remarks in Spanish and I will translate into English. And if you want to practice your Spanish, it’s a good moment to listen to Jose. Jose, the floor is yours. Thank you.

José Cepeda: It’s okay. Good. I will. Okay. Well, thank you. Thank you all. Thank you for your invitation to this panel. It’s very important for Europe, for the Parliament of Europe, to debate cyber security and cyber defense. But we say it is very important to speak in Spanish to the Latin America area. It’s very important for us. Yes. Well, I want to speak a little bit in Spanish, especially for that area so important to the world, which is Latin America, with our colleagues, who are doing an immense job, a great job in recent years to promote the policy of cyber security and the policy of cyber defense in all their countries. From the European Parliament, my introduction, what I wanted to contribute, is a bit of the work of projection that we are doing, a prospective work in a very complex world context, based on multiple military conflicts that we are also having at the border of Europe with Ukraine, for example, and the whole Middle East, which is affecting us, and Europe has to become aware of the importance of future forecasts and of where we also have to direct our cybersecurity policies to protect ourselves. In this sense, I would like to share with all colleagues the work that is being directed towards hybrid cyber-threats, starting, on the one hand, with the multi-channel attacks, as we call them, which are, in the end, the famous state actors sponsored by states that are working in a direct way, in a multiple way. First, they generate disinformation structures and, on the other hand, what they do is spread them. In this way, they also make the countries unbalanced in some way. Of course, we are working on this analysis. We are also talking about technical cyber defense, which is a very important element where artificial intelligence is opening a new path, let’s say, for the bad guys. And what we also have to do in the field of defense is to work to protect ourselves, generating cyber-shields also based on artificial intelligence that make forecasts of possible cyber-attacks and, above all, high-level structures to respond in real time. I mean that artificial intelligence, as a technology, is not only going to serve the bad guys; all European structures, for example, are already working in that direction. There is also a very important element that is linked to mass espionage. I am talking about critical platforms based on the satellite network, encrypted communications, data processing centers, which, for example, according to the European Cybersecurity Agency, ENISA, will be targeted with methods where the use of advanced, stealthy, deep malware, for example using infiltration techniques in firmware, will be very common. Personally, I am very concerned about the training of countries in the field of putting themselves at the forefront, for example, in quantum computing. That is a question where we are falling far behind. I mean, there are many private companies that have it, and yet there are many governments that do not have it, because large investments are needed, and that is also one of the very important issues that we are going to work on.
The forecast for next year is also based on the automation of cyberattacks around polymorphic malware, based on artificial intelligence, with a malicious mutant code, that is, a code that is inserted in critical infrastructures, and that, possibly thanks to the technology based on artificial intelligence, is mutating as possible cyber defense structures are developed, in this case, by the institutions or governments. That is going to be very complicated, but precisely because of that, it is also very important to know how it will evolve in the coming years. Regarding autonomous cyberattacks, talking about botnet systems, distributed attacks based on DDoS, for example, without a doubt, they will reach new scales, they will be programmed and adapted precisely, dynamically, based on technology based on artificial intelligence. Perfect. Very good, very good. Thank you, José. I will translate. José is explaining to us all the preventions and activities that they are doing about the cyber security and cyber defense forecast for 2025. So first he spoke about the escalation of hybrid threats by integrated multi-channel attacks from state and state-sponsored actors that combine cyber attacks with disinformation, digital sabotage and kinetic activities. As an example, he explained the manipulation of ICS networks to disrupt power, followed by disinformation campaigns to maximize social impact. Then he explained to us about technical cyber defense: early detection systems with machine learning algorithms will be necessary to identify patterns in these hybrid actions. And then he explained about targeted mass espionage: critical platforms such as satellites, critical communications and distributed data centers will be prime targets. According to ENISA, what’s ENISA? That’s the European Cyber Security Agency. Methods will include the use of advanced deep-stealth malware and firmware infiltration techniques. Then he explained to us about the automation of cyber attacks and artificial intelligence based threats. Artificial intelligence is used not only by bad actors, but also by good actors. Polymorphic malware with artificial intelligence: attackers will use artificial intelligence to generate malicious code in real time that evades traditional detection solutions. This type of malware will be especially problematic for environments that are not updated or do not implement artificial intelligence based adaptive systems. Then he explained to us about autonomous cyber attacks, about botnets, systems that distribute attacks, such as denial of service, which will reach new scales by being programmed to dynamically adapt to defense responses. And then finally, he explained about technical cyber defense: SIEM, security information and event management, and SOAR, security orchestration, automation, and response platforms will be essential for managing automated responses. All these are issues about the cyber security and cyber defense forecast that they are

Olga Cavalli: preparing for the next year. It seems like we have a round of comments, if that’s possible. OK. I would suggest, after these very interesting comments that Jose made about the forecast for cyber defense in 2025, I would like to go to the questions to our panelists. Allow me to find my script. So Merike, are you there? Now I can hear you. Merike, you’re an experienced CISO, so you’re a woman devoted to cyber security. And based on your experience, which are the skills that a CISO must have, especially in a developing country, to deal with the challenges that developing economies usually face in relation to cyber security and cyber defense, and also after what Jose shared with us, which are the threats that they forecast for the next year? And welcome. Thank you very much for joining us.

Merike Kaeo: Yeah. Thank you very much for the question. Yeah, being a chief information security officer is a position that has evolved over the years. And it can mean different things to different people. However, to me, the role has always meant that you are the person responsible for developing and implementing the strategy to provide resilience and trustworthiness in our digital environments. And in developing countries, where sometimes they are still evolving to create effective regulation and also national cyber security laws, you are most often also a stakeholder and should be in the room to be a voice, and especially so if you’re a CISO in critical infrastructure. So I’m going to list three primary skills. One of them is that you absolutely must have critical and strategic thinking. And part of that is because in developing economies, you’re often faced with challenges that include lack of resources. And it’s not always financial. Really, I think the biggest challenge is a skills gap, where you just don’t know or you don’t have the people that can help with overall cyber security roles. And this lack of resources and of an effective team means that sometimes the CISO has to be the security architect, the security operations team, the security operations center, the incident response team, and the threat intelligence team. They have to do everything while they’re trying to prioritize what needs to be done and how to actually get it done. So by utilizing strategic thinking, a CISO in developing economies can determine when to outsource and which tasks need to be prioritized. Most nations or companies that provide cyber security help will usually have a list of the top five or 10 items to do. And they think, oh, that’s not so much. Well, in many developing economies, with the lack of resources, you might only be able to do one or at most two of these items. So which ones do you choose? And when outsourcing, it is extremely important to be strategic and ensure that capacity building training is included so that developing economies can build internal knowledge and expertise to provide for future opportunities within their own countries. It can also be beneficial because sometimes in developing countries there are language constraints, so being able to communicate in your own language matters. The second skill is having effective communication skills, because you must be able to communicate critical risks that are relevant to your organization, industry, or nation state. And as I previously mentioned, you are typically a stakeholder in developing legal constructs and also regulations within your developing country. And effective communication can also help build trust and collaboration, which is what brings me to the third extremely important skill of being able to promote collaboration and information-sharing. This is absolutely critical in developing economies. We all learn from each other. I’ve had the privilege to work in a very global environment, and I know that the Pacific Island nation-states, Southeast Asia, Latin America, Africa, the Balkans, I mean there are many, many information-sharing groups that are region-specific. And this very much helps developing economies, because within a region you usually have different levels of maturity when it comes to cybersecurity, whether defense or understanding or skills. And so it’s not even that you just build up sharing groups within your own sector, be it financial or health care or what have you, but sometimes also you have similar issues based on geographic region. So that information-sharing and collaboration as to which threats you’re most vulnerable to, right, what is actually happening in your region, is extremely important. And what is also critical regarding collaboration is that you must know who to escalate to, and when, as a CISO, when you see that there’s nation-state relevant information that is specific and that can target your specific nation-state or region. So to sum it up, I think the three skills that are really important are: one, critical and strategic thinking; two, effective communication, which means with the technical sector, with policymakers and regulators; and three, which is extremely important to me, promoting collaboration and information-sharing. So thank you for that, for giving me a chance to enumerate on those aspects.

Olga Cavalli: We always come back to the things we always talk about: capacity building, learning and exchanging information. I think this is so important, but sometimes, having worked in several technological environments, we don’t have that much time, and we have very few resources. So sometimes it’s not that you don’t want to share the information, it’s that you don’t have the time to prepare the information to be shared with other colleagues, because sometimes you have to reshape it or prepare it to be easily exchanged among colleagues. So that’s something that has happened to me, and maybe what we lack is the time, or somehow a resource that could help us by making the communication easier. So, Ram, you are an expert in critical infrastructure, and I consider DNS a critical infrastructure. And you are the chief strategy officer of a very big company that has infrastructure spread all over the world, and where security is a main issue, because if the DNS doesn’t work, most of the activities that we do on the internet won’t be possible to perform. So, how do you think that, in a developing economy, this critical infrastructure is being protected? Which measures can the local people take to protect this national critical infrastructure from cyber attacks?

Ram Mohan: Thank you. Can you hear me? Okay, great. Thank you. We have the privilege of serving both developed nations as well as nations that are developing. Right. So we run the critical infrastructure of Australia’s .au; we’re the designated service provider, and we’re actually designated a critical infrastructure provider for Australia, a developed nation. But we also do this for many other smaller countries, countries in the Caribbean. We do this for Belize. We’re going to be doing this shortly for Anguilla’s .ai, in just a little while. And what you find is that the nature of the threat is not any different. The kinds of threats that developed nations encounter and the kinds of threats that developing nations encounter are no different. The scale and size of the threats are also often not much different. And what is different is preparation, people and policy. Those are the three things that distinguish the responses of a developed nation from a developing one. Merike already spoke about resources, and you, Olga, spoke about accessibility of information. It’s not enough to just have the data on what the threats are and how to respond. It is important to have it accessible in the right language, at the right level; you have to calibrate it. But in reality, in my perspective, when the problem actually happens, when you have a nation under attack, a nation’s critical assets under attack, when you have the banking system that is crucial being targeted, when you have telecommunications networks that are in trouble, the very first thing that fails are the systems and the people who are unprepared. And it doesn’t matter if you have great resources, great knowledge, great education; you will find, and this is true even in developed nations, but it’s especially true in developing countries, there is no preparation for it. They have read the paper, they have seen the website, they have even had a discussion at the cabinet level on the DDoS attack that had happened or the fact that you need to secure your routers, right? So they have the theoretical knowledge, but when the attack happens, they’ve never drilled for it before. So what you find is that that preparation is the crucial difference between a developed nation’s response to a cyber attack and a developing nation’s response to a cyber attack. The second thing is people. Often you will find in developing countries that the people who have the knowledge to distinguish whether a problem that is occurring is an attack or merely an error, there are only a few people who know it. And if those people are not available or on vacation, right? I mean, I can tell you a story: in one of the countries that we serve, there was one primary person responsible for cyber defense, and his wife was giving birth, he was in the hospital, the country came under attack and the systems went down, because he had to choose between being there for the baby or being there for the country, and he chose the baby, right? But it’s a real life issue, right? So people, second thing, you just don’t have enough resources in that area. The third part is policy. You find in developing nations, governments, they look at, say, the UN SDGs. They look at the various protocols or capability and maturity models. The GCSC had a bunch of norms that they developed on safety in cyberspace.
They are excellent frameworks, but you need governments to actually take those frameworks and implement policy so that it gets into curriculums, it gets into training systems, it gets into other governmental departments. It becomes a priority for those departments. An example there: if you look at Australia, for instance, several years ago, they got really concerned about cyber defense, and the government came up with what they called the Essential Eight. These are eight essential principles for cyber defense. They include well-known things like two-factor authentication, et cetera. But what they did was they implemented policy. They said every government department within 12 months must implement the Essential Eight. And then two years later, they said every critical infrastructure provider must certify implementation of the Essential Eight, right? So I think what you need in developing economies for success here, or for a proper cyber defense strategy, is… preparation, people, and policy.

Olga Cavalli: And then the policy, the eight things, very interesting. Although I think that developed companies and countries are also attacked. That caught my attention, because there are nations and companies that have a lot of resources to build a very secure infrastructure, and even so they come under attack. And developing economies are in a much more vulnerable situation, yes. So preparation is the issue, okay. So I would like to go to Jose, you share with us more of your… I don’t know if you can continue commenting. We go with the forecast made by… I don’t speak, no, no, no, no, no, no. Yes.

José Cepeda: Well, the next points I want to just say about cyber defense, and it’s very, very important, is collaboration at the international level. Spanish said that’s the oldest possible with… So, yes, sorry, I try in Spanish and in English. Well, no, I’m not going to say what we have, but cyber defense and international collaboration, especially for our listeners and collaborators in the Latin American space. It is very important to convey that there is a unique structure that can unite everything: trust between countries, trust between governments, trust to develop international cooperation policies. In all my experience that I have had over the years of work in Latin America, and it is something that we are also starting to develop in a very important way in the European context, just a few weeks ago, Finland’s Niinistö presented a report talking about cooperation and European intelligence to unite the 27 countries of the European Union. Well, in that context, joint work of NATO, the North Atlantic Treaty Organization, with the European Union, to promote a great cyber coalition, precisely based on trust, to be able to work in a single European common intelligence system. Here I have some colleagues from the European Parliament, Galvez and some others, who have been working in a very important way on the NIS2, which is a cooperative environment, also a series of rules that are setting the pace for the 27 countries, a series of standards based on cybersecurity, which will undoubtedly be the environment of the future, such as the Cyber Security Matrix Certification, which is very important, because it implements common certification systems throughout the context of the European Union, precisely speaking of critical infrastructures for the 27 countries. Well, in short, I don’t want to extend much more. I think that, especially thinking about next year, in 2025, the main cyber threats will be based on a greater sophistication of cyberattacks, the use of artificial intelligence as a weapon, both offensive and defensive, and an increase in the risks associated with technology, based on the Internet of Things and, above all, the supply chains. And the cyber defense strategies must include active defense measures, must include predictive intelligence, based above all on artificial intelligence, and a series of solid regulatory frameworks based on that international cooperation that I mentioned, and, above all, a protection of the physical, critical infrastructures, also based on new technologies. I believe that the future bet is going to be a reality in the coming years, and it has a lot to do with quantum computing. All countries are betting on quantum computing in a serious way, precisely to develop the entire structure of artificial intelligence, and, of course, the bad guys, to name a few, who are already using it, will have much more resources within their reach. And we play a lot in this, not only the government structures at the civil level, but, without a doubt, all the armed forces and all the structures also at the military level and at the level of the defense of all our countries. So, what Jose explained is a very interesting issue, is that there is a thing, which is trust, trust among different countries. And he explained that there was a report presented by Finland’s Niinistö, and that Ursula von der Leyen expressed that, with that cooperation, they want to create a European intelligence cooperation agency, something like that. It’s a project based on trust, and there is joint work with the organization, a coalition based on trust, and a unique way of harnessing all these threats. We think that for next year we expect more sophistication in all the attacks, and that the use of artificial intelligence will be not only for defense, but also for offensive attacks. There will also be attacks done through the Internet of Things, things connected to the Internet, in the supply chains using the Internet of Things. So all the strategies of cyber defense must use predictive intelligence, the regulatory frameworks must be based on trust, and quantum computing must also be considered, because the big capacity that computers will have in the future will have a very big impact on cyber security and cyber defense. Not only for the countries, but also for the military forces. Okay, thank you very much, Jose. Now I will go to my… There’s something in the… How…

Olga Cavalli: How do we close this gap that exists in human resources in cyber defense? Just for you to have an idea, at my university we have opened a new degree program in cyber defense, and we had one person in one month. So the demand is high, but people don’t have the time. So Wolfgang, what do you think?

Wolfgang Kleinwachter: It’s indeed a difficult question, and it was already mentioned by previous speakers. We have this gap, the skill gap, which is clear. We have the resource gap. And then what Ram said, preparation, people, and policy are the differences, but it’s a complex problem, which you cannot settle with one hit. So that means you have a number of different initiatives, which pull together a stream which will enable the developing countries or the global South step by step to close the gap. And certainly, it needs also help from the global North. Help, I would say, quote unquote, in quotation marks, because the best help is if you just provide resources which enable those countries to find their own way, because otherwise they are just a target for the export of models. And I see here a problem, because the whole world agrees capacity building in AI, in every domain, is extremely important. There is no disagreement. But what we have seen in the General Assembly of the United Nations this year, we had two resolutions, one sponsored by the United States and one sponsored by China, and the Chinese resolution in particular is about AI capacity building in third world countries. And it got overwhelming support. Americans even supported the Chinese resolution and China supported the U.S. resolution. So that’s fine. There is no reason to be against it, but there is a risk in it that, you know, capacity building organized by China will include the export of the Chinese model, and the capacity building offered by the United States will include the export of the American model. So I think the challenge is really, and this goes to policy and people, that developing countries and the global South have to develop their own strategy and to define exactly what they need. And if they have a list which specifies exactly the needs, then they can ask who can help to meet the needs, so that you are not dependent on the big brother or the big sister or the big uncle, but you start from your own needs. And I think this is an important point and has to guide the strategy, the long-term strategy for the global South. That’s why the African Digital Compact is so important, because, you know, they had their own digital compact which was in the cradle of the Global Digital Compact in the United Nations, but the African Digital Compact has specified the specific needs of Africa. And what is relevant for the general digital strategy for those countries is also important for the defense sector, for AI, because, as Ram has said, you know, if it comes to attacks, there is no difference whether it’s developed countries or developing countries. That’s the same. The question is how you react, how you prepare the people, and how you have policy in place, and in particular this preparation aspect, so that you have, you know, not only one person, but a backup for the person that is on vacation; the other one should be in the office. So I think that’s a problem. But there is another aspect, which includes also the preparation of military personnel. I was in another workshop a couple of months ago, where they discussed, you know, what is the type of the soldier of the future, how to train the soldiers of the future. You know, in the 20th century, you needed strong young men who could, you know, move very fast.
But today, you know, if you have a young man who is overweight and sits and is very capable with his computer and with the keyboard, he’s probably the better soldier. So I think the whole AI revolution in the military field will change this also, or it’s a challenge to understand how to train the soldiers of the future. So the best thing is you do not need them. So that means peace is always better as well. But you have to be prepared, and if you have the wrong soldiers, then you will lose the war in the 21st century.

Olga Cavalli: I like this soldier of the future concept. I think it’s very interesting to think about. But about this independence that the countries should have in capacity building, there is a big challenge, because the technology is developed in a few countries, I would say mainly in two countries, and all the rest of the world is using that technology. So capacity building also depends on which technology we are using. I would like to ask Chris. Chris, are you there? Thank you for being with us. It’s a pity not to have you here at the IGF. You’re an expert in cyber diplomacy and international relations, the first cyber diplomat in the world. Which are the international and regional debate spaces developing economies should focus on to follow what is happening, especially considering that sometimes you don’t have the resources to follow all the spaces and to go to all the meetings? Which ones would you say are the most important?

Christopher Painter: Olga, thanks. Good to be with you all. Sorry, I can’t be there in person. I wish I was, but sadly I’m not. I’d say a couple of things. First of all, it is a real problem that there’s a myriad of different forums that people, particularly developing world countries, the global South, should be participating in, not just to gain the knowledge, but also to share their experience, because that’s critically important. And just to the point that others have said, when we talk about capacity building, I view that as a foundational element for everything else in cyberspace. If you don’t actually have the capability, both the policy capability and the technical capability, you can’t really participate well in these forums, or secure your own systems, or respond to attacks. So that really is, I think, the connective point. And for me, when I’ve seen that, as Ram also noted, you need a number of different factors, including the political will in the country too, not just the technical people saying they want the training, but also the political commitment that this is a real priority for them. And that’s becoming more real, but that’s been hard. And that’s why for the Global Forum on Cyber Expertise, for instance, one of the groups that I do a lot of work with and have for years, which is a capacity building platform that has 60 countries, a couple dozen companies, civil society, and academia, we’ve moved very much to a demand-driven approach, as was said, that asks the global South what they want, rather than saying, here’s what you get. And that’s a much more sustainable model. But in terms of the forums that you mentioned, there is obviously the UN, the Open-Ended Working Group, in particular, which is dealing with cyber. I’d say that when that first met, the first one of those, now about eight years ago, I was struck by the number of developing world countries who came and said, look, we’d love to debate things like norms and these esoteric concepts, but what we really need is help. We need help with our own capacity. We need help in building our institutions and our technical capabilities. So that’s a critical one. I’d say there’s some good news story on that in trying to get more global South participation, particularly a women in cyber program that’s been administered by the GFCE, but with many countries behind it. And lots of women from the developing world have been going and participating and making interventions in that session, so that’s important. There’s been training that UNODA has done for cyber diplomats, especially in developing world countries. And of course, there’s a number of others, Diplo and others have done that, and that’s important. Coming up this year is the WSIS plus 20, which will be a big deal, I think. It’s hard to believe that we’re already at plus 20. I remember plus 10, but that’s an important one, I think, for developing world countries to be at. The GFCE, our platform, we have 220 members and partners. What’s important is we’ve created regional hubs to allow these countries to more easily interact, including in the Pacific Islands, ASEAN,
The Council of Europe and the UNODC for cybercrime issues. And so, as you noted, the problem is there’s so many different forums that these folks could attend. You need the right people to attend. You can’t just have your representative in New York, for instance, in the UN, go to all these meetings unless they have expertise. And you need the ability to attend them. And even for big countries, trying to attend all these meetings, these plethora of these meetings is hard. For small countries where it’s one person, and I’ll give you an example, a Pacific Island country, wonderful person from Fiji who is both doing their cybersecurity there, but she’s also traveling to New York and these other forums to try to participate. It’s very difficult on them to be able to split that time. And so we need to figure out how we can more constructively engage with the developing world and allow them to be part of these forums because it’s critically important. And obviously the IGF is another forum as well. So that’s a real challenge because we can’t simply have the developing world at one level and the rest of the world at another level. We need to make sure, for practical reasons too, even for the developing world’s standpoint, they need a lot of countries to work with them to be able to go after cyber threats, which are often routed through these countries if they don’t have strong laws or capabilities.

Olga Cavalli: Thank you, Chris. And the good thing is that now most of the events are hybrid and many things get recorded. I know it’s not the same. It’s lovely to be here and interact with people, share coffee, share a sandwich. But if you want to do research or you want to know about something, you can find information online. And luckily, in several languages. And language is also a big barrier. At least for Latin America, it’s a big barrier. Not everyone speaks English well enough to understand foreign speakers or to read the documents clearly. And now I would like to go to my dear friend, Philipp Grabensee. Philipp is a defense counsel and an expert also in DNS structure, which is a critical infrastructure, because he was former chairman of Afilias, which was a company, now merged with another company, but it’s a global company that manages DNS infrastructure. Philipp, how do you think developing economies can find guidance or reference in order to update their legal frameworks to fight against cybercrime, with prevention as an important thing, regulations and policy? So how can that be really considered and kept up to date? So, the agility in the development of these regulations. And welcome, and it’s a pity that we don’t have you here, but we are seeing you online.

Philipp Grabensee: Thanks for having me. And this is a tough question to answer in the remaining two, three, four minutes I have, but let me try to summarize that a little bit and make a few points regarding that. So far, we have basically talked about crimes against computer systems, but another big part is of cybercrime and fighting crimes is the content related crimes. And let me, and I think we cannot learn too much or of course we can learn, but we can also learn from the mistakes, which has been made within. legal frameworks, because every time something horrible happens, and I will give you an example for that, the society and the people are calling for new laws and new tools for enforcement agencies and increasing of laws, and a lot of times this crying and asking for new laws has very negative side effects, and I think the problems are really somewhere else, or a lot of the problems are somewhere else, and I think in the discussion we had, you know, some, you know, it has become very clear that the problems are not so much the laws or the framework or technology, but it’s really that people are unprepared, or the problems are the, you know, what Marek calls the skills gap you have, and as much as you have the skills gap in, you know, people in the technology field, you have also the skills gap or people unprepared in law enforcement, and so it’s not really the technology, it’s not so much a framework, it’s really about, you know, the skill set missing, and the preparation of the people, so the example I’m giving you here, it’s a, you know, an example of, you know, horrible cases of, you know, possession of child sexual abuse material in Germany, so there were horrible cases in the news, a lot of big outrage in society, and laws were increased and new laws were passed, and, you know, reform of the German criminal law, which in the end led to unintended consequences for teenage sexual expression to the digital media, because suddenly all kind of, you know, exchange of information and exchange of pictures from teenagers, you know, on media were suddenly criminalized and a crime, so that was a really negative side effect, you know, of that calling for new regulations. And the real problem was, why was law enforcement not effective before, you know, against, or why is law enforcement, or where’s the weaknesses still in law enforcement to fight against, you know, these horrible crimes? The problem is you’re lacking well-trained and, you know, the capacity of well-trained people in law enforcement. That was really the problem. And this problem has not been solved by increasing, you know, increasing fines or introducing new paragraphs, making certain behavior, or criminalizing certain behavior. It is the same, what counts, what Rom said, what Marengo said, it’s really the capacity of the people, the capacity building, the training of the people, the skills gap. In the specific example of, you know, sexual abuse material, we were just lacking in, you know, in Germany, we were lacking the, you know, the police enforcement officers who were prepared to the exposure of traumatic material, who had the psychological training to deal with that. We had not enough people to do it, and not prepared people to go through the internet and look at this content. So that’s why, you know, still it’s a problem. So what you need is really also here, of course, frameworks help, and also you can have then framework for capacity building. But it’s not so much a frameworks, and it’s not so much a laws. 
It really comes down to the people. Of course, artificial intelligence helps you go through the internet and identify potential crimes. But in the end, it has to be people who look at the material and bring it to prosecution, and that is what really helps protect the victims of those horrible crimes. So I can only echo what my colleagues said: with regard to content-related and cyber-enabled crimes, the same holds as for crimes against the computer system itself.

Olga Cavalli: It’s not only having the policy, it’s making it work, making it relevant, because if not, what is it for? So we have five minutes. Is there anyone in the audience who would like to add something or make a comment? Otherwise we do a final minute per speaker, and then we have to leave the room. I will start with Ram, who’s looking at me directly.

Ram Mohan: Thank you, Olga. This is a really terrific set of comments that have come through from everybody. What strikes me as a useful next step is to collate the information from a session like this, go back to developing countries, at least those we know, and check whether this will actually work. You see what the GFCE is doing, what Chris was talking about. They already have a framework; they already have material that is available. And I think we ought to look at what’s already done, not reinvent the wheel, in building cyber defense. Effective cyber defense is not new cyber defense. Effective cyber defense is cyber defense that has already worked, and, more importantly, defense that has already failed and therefore been patched. That, I think, is the way to look at it. Let’s be practical. Let’s take small steps, because large steps will overwhelm developing economies.

Olga Cavalli: Thank you. Wolfgang?

Wolfgang Kleinwachter: It’s not a theoretical question. It’s a very practical question. It’s just implementation. You have to do it.

José Cepeda: Thank you. Well, I see three pillars: people, policies, and preparation. It’s very, very important; these three pillars are necessary for all countries. This is not the future, it’s the present. It’s necessary now.

Olga Cavalli: Thank you, José. Merike, your final comments?

Merike Kaeo: Chris mentioned it first, and I had the privilege of helping a developing nation set up their national CSIRT. I think that is critical, and there are many, many guidelines that exist that also cover the legal constructs and the regulatory frameworks countries should put in place, within their own culture and their own legal systems, to build up a national CSIRT. I think that will greatly help developing nations.

Olga Cavalli: Thank you very much. Chris, your last comments?

Christopher Painter: I think it goes back to what we were saying before. This is a critically important area. You need political will in countries, you need sustainability and continuity, which is always difficult, and you need non-duplication as we try to match resources with the needs, and the needs are great. I totally agree: don’t recreate the wheel. There’s lots of stuff out there. The Cybil portal that the GFCE runs has hundreds of projects, calendars, things that I think are helpful, and it’s publicly available; it’s not limited just to the members of the GFCE. It’s been linked to the UNIDIR Cyber Policy Portal at the UN, so that’s, I think, a really good cross-linking resource. And the last thing I’d say, on the topic of things that are out there: I’m also on the board of a nonprofit called the Center for Internet Security, which, like the Essential Eight, has its set of essential controls, so a lot on cyber hygiene is available. So I agree: collating what’s there rather than recreating things is critical.

Olga Cavalli: Thank you. Philipp, your last comment.

Philipp Grabensee: I can really just echo the last four comments. I think in the end we all came to the same opinion. Not recreating the wheel also means not always making new laws; in law enforcement, it means enforcing existing laws and building the capacity of people to enforce those laws. That’s the way to go forward. Also because existing laws have already passed the critical test of how they relate to human rights and constitutional rights. Always creating new laws carries a lot of danger, speaking as a defense counsel, because those laws then have to be examined from all kinds of perspectives, and a lot of things go wrong when laws are passed, especially in a hurry. So: don’t recreate the wheel, don’t just write new laws; implementation and enforcement of existing tools and laws is the way to go ahead.

Olga Cavalli: Okay, please help me applaud our dear friends and colleagues for a very interesting session. And for the remote participants, don’t go away, we will take a picture. Don’t go away. We take a picture with you. Oh, thanks to you. Did you take a photo of me? Yes. I hope they can be seen. I’m here in the middle, right? Yes, please. All right, thanks, have a great day and see you wherever, maybe next year in Oslo, who knows. All right, see you guys. Take it easy. Happy Christmas. Happy season. See you soon.

Merike Kaeo

Speech speed: 127 words per minute
Speech length: 775 words
Speech time: 363 seconds

Skills gap in cybersecurity workforce

Explanation

Merike Kaeo highlights the significant challenge of the skills gap in the cybersecurity workforce, particularly in developing economies. This gap refers to the lack of trained professionals who can effectively handle cybersecurity tasks and responsibilities.

Evidence

Kaeo mentions that in developing economies, a CISO might have to perform multiple roles due to lack of skilled personnel, including being the security architect, security operations team, incident response team, and threat intelligence team.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Ram Mohan

Christopher Painter

Wolfgang Kleinwachter

Agreed on

Importance of capacity building and skills development

Critical thinking and strategic prioritization of tasks

Explanation

Kaeo emphasizes the importance of critical thinking and strategic prioritization of tasks for cybersecurity professionals, especially in resource-constrained environments. This skill allows professionals to determine which tasks are most crucial and how to allocate limited resources effectively.

Evidence

She mentions that in developing economies, a CISO might only be able to implement one or two out of the top five or ten recommended cybersecurity measures due to resource constraints.

Major Discussion Point

Key skills and strategies for cybersecurity

Collaboration and information sharing between countries

Explanation

Kaeo stresses the importance of collaboration and information sharing between countries in cybersecurity efforts. This approach allows countries to learn from each other’s experiences and share best practices, particularly beneficial for developing economies.

Evidence

She mentions the existence of various region-specific information-sharing groups in areas such as the Pacific Island nation-states, Southeast Asia, Latin America, Africa, and the Balkans.

Major Discussion Point

Key skills and strategies for cybersecurity

Agreed with

José Cepeda

Christopher Painter

Agreed on

Need for international cooperation and trust

Ram Mohan

Speech speed: 125 words per minute
Speech length: 909 words
Speech time: 436 seconds

Lack of resources and preparation for cyber attacks

Explanation

Ram Mohan highlights that developing economies often lack the necessary resources and preparation to effectively respond to cyber attacks. This includes not only financial resources but also human resources and established protocols.

Evidence

Mohan provides an example of a country where there was only one primary person responsible for cyber defense, and when that person was unavailable due to personal reasons, the country’s systems were vulnerable to attack.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Merike Kaeo

Christopher Painter

Wolfgang Kleinwachter

Agreed on

Importance of capacity building and skills development

Preparation, people, and policy as crucial factors

Explanation

Mohan emphasizes that preparation, people, and policy are the three crucial factors that distinguish the responses of developed nations from developing ones in cybersecurity. He argues that these factors are more important than the nature or scale of the threats themselves.

Evidence

He mentions Australia’s ‘Essential Eight’ principles as an example of effective policy implementation in cybersecurity.

Major Discussion Point

Key skills and strategies for cybersecurity

José Cepeda

Speech speed: 123 words per minute
Speech length: 1765 words
Speech time: 859 seconds

Need for trust and international cooperation

Explanation

José Cepeda emphasizes the critical importance of trust between countries and governments in developing international cooperation policies for cybersecurity. He argues that this trust is fundamental to creating a unified structure for cyber defense.

Evidence

Cepeda mentions the recent presentation by the Finnish Prime Minister about cooperation and European intelligence to unite the 27 countries of the European Union, and the joint work of NATO with the EU to promote a great cyber coalition based on trust.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Christopher Painter

Merike Kaeo

Agreed on

Need for international cooperation and trust

Development of common certification systems in the EU

Explanation

Cepeda discusses the development of common certification systems for cybersecurity in the European Union. These systems aim to implement standardized certification across all 27 EU countries, particularly for critical infrastructures.

Evidence

He mentions the Cyber Security Matrix Certification as an important initiative in this direction.

Major Discussion Point

Legal and policy considerations for cybersecurity

Christopher Painter

Speech speed: 180 words per minute
Speech length: 1050 words
Speech time: 349 seconds

Importance of capacity building and technical assistance

Explanation

Christopher Painter emphasizes the critical importance of capacity building and technical assistance in cybersecurity, particularly for developing countries. He views this as a foundational element for everything else in cyberspace, enabling countries to participate effectively in international forums and secure their own systems.

Evidence

Painter mentions the Global Forum on Cyber Expertise, which has 60 countries, companies, civil society, and academia as members, and uses a demand-driven approach to capacity building.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Merike Kaeo

Ram Mohan

Wolfgang Kleinwachter

Agreed on

Importance of capacity building and skills development

Differed with

Wolfgang Kleinwachter

Differed on

Approach to cybersecurity capacity building

UN Open-Ended Working Group as an important forum

Explanation

Painter highlights the UN Open-Ended Working Group as a crucial forum for discussing cybersecurity issues, particularly for developing countries. He notes that many developing countries have used this forum to express their need for capacity building assistance.

Evidence

He mentions that when the Open-Ended Working Group first met about eight years ago, many developing world countries expressed their need for help in building their institutions and technical capabilities.

Major Discussion Point

International forums and frameworks for cybersecurity

Agreed with

José Cepeda

Merike Kaeo

Agreed on

Need for international cooperation and trust

Need for political will to prioritize cybersecurity

Explanation

Painter stresses the importance of political will in countries to prioritize cybersecurity. He argues that without this commitment from political leadership, efforts to improve cybersecurity capabilities may not be successful.

Major Discussion Point

Legal and policy considerations for cybersecurity

Wolfgang Kleinwachter

Speech speed: 128 words per minute
Speech length: 739 words
Speech time: 344 seconds

Developing countries should define their own cybersecurity needs

Explanation

Wolfgang Kleinwachter argues that developing countries should define their own cybersecurity needs and strategies, rather than relying solely on models exported from developed countries. This approach ensures that the strategies are tailored to the specific context and requirements of each country.

Evidence

He cites the African Digital Compact as an example of a region-specific strategy that specifies the particular needs of Africa in the digital realm.

Major Discussion Point

Cybersecurity challenges for developing economies

Agreed with

Merike Kaeo

Ram Mohan

Christopher Painter

Agreed on

Importance of capacity building and skills development

Differed with

Christopher Painter

Differed on

Approach to cybersecurity capacity building

African Digital Compact as a model for regional strategies

Explanation

Kleinwachter highlights the African Digital Compact as a positive example of a region-specific digital strategy. He suggests that this model could be useful for other developing regions in crafting their own cybersecurity strategies.

Evidence

He mentions that the African Digital Compact was developed in the context of the global digital compact in the United Nations, but specifically addresses the needs of Africa.

Major Discussion Point

International forums and frameworks for cybersecurity

Philipp Grabensee

Speech speed: 154 words per minute
Speech length: 943 words
Speech time: 366 seconds

Caution against hastily creating new laws in response to cybercrime

Explanation

Philipp Grabensee warns against the hasty creation of new laws in response to cybercrime incidents. He argues that this approach can lead to unintended consequences and may not address the root causes of the problem.

Evidence

Grabensee provides an example from Germany where new laws passed in response to child sexual abuse material cases had unintended consequences for teenage sexual expression in digital media.

Major Discussion Point

Legal and policy considerations for cybersecurity

Importance of enforcing existing laws and building capacity

Explanation

Grabensee emphasizes the importance of enforcing existing laws and building capacity in law enforcement, rather than constantly creating new laws. He argues that the real problem often lies in the lack of well-trained personnel to enforce existing laws.

Evidence

He mentions the example of Germany lacking police enforcement officers who were prepared for exposure to traumatic material and had the psychological training to deal with it in cases of sexual abuse material.

Major Discussion Point

Legal and policy considerations for cybersecurity

Olga Cavalli

Speech speed: 134 words per minute
Speech length: 1475 words
Speech time: 657 seconds

Need for developing countries to participate in multiple forums

Explanation

Olga Cavalli highlights the challenge for developing countries to participate in multiple international cybersecurity forums. She notes that while participation is important, it can be difficult due to resource constraints.

Major Discussion Point

International forums and frameworks for cybersecurity

Agreements

Agreement Points

Importance of capacity building and skills development

Merike Kaeo

Ram Mohan

Christopher Painter

Wolfgang Kleinwachter

Skills gap in cybersecurity workforce

Lack of resources and preparation for cyber attacks

Importance of capacity building and technical assistance

Developing countries should define their own cybersecurity needs

Multiple speakers emphasized the critical need for capacity building and skills development in cybersecurity, particularly for developing economies. They agreed that addressing the skills gap and providing technical assistance are fundamental to improving cybersecurity capabilities.

Need for international cooperation and trust

José Cepeda

Christopher Painter

Merike Kaeo

Need for trust and international cooperation

UN Open-Ended Working Group as an important forum

Collaboration and information sharing between countries

Speakers agreed on the importance of international cooperation and trust-building in addressing cybersecurity challenges. They highlighted various forums and initiatives that facilitate such cooperation.

Similar Viewpoints

Both speakers emphasized the importance of focusing on implementation and capacity building rather than creating new laws or frameworks. They argue that effective enforcement of existing measures is more critical than constantly developing new ones.

Ram Mohan

Philipp Grabensee

Preparation, people, and policy as crucial factors

Importance of enforcing existing laws and building capacity

Unexpected Consensus

Caution against hasty creation of new laws

Philipp Grabensee

Ram Mohan

Caution against hastily creating new laws in response to cybercrime

Preparation, people, and policy as crucial factors

While most discussions focused on building capacity and implementing new measures, there was an unexpected consensus on the need for caution in creating new laws. Both speakers, from different perspectives (legal and technical), agreed that hasty creation of new laws or constant policy changes might not be the most effective approach to cybersecurity.

Overall Assessment

Summary

The main areas of agreement centered around the importance of capacity building, skills development, and international cooperation in addressing cybersecurity challenges, particularly for developing economies. There was also consensus on the need for strategic thinking and prioritization of resources.

Consensus level

There was a high level of consensus among the speakers on the fundamental challenges and approaches to cybersecurity in developing economies. This consensus suggests a clear direction for future efforts in this area, focusing on capacity building, international cooperation, and strategic resource allocation. The implications of this consensus are that international initiatives and policy-making bodies may find broad support for programs that address these agreed-upon priorities.

Differences

Different Viewpoints

Approach to cybersecurity capacity building

Wolfgang Kleinwachter

Christopher Painter

Developing countries should define their own cybersecurity needs

Importance of capacity building and technical assistance

While both speakers emphasize the importance of capacity building, Kleinwachter stresses the need for developing countries to define their own needs, while Painter focuses on the importance of external assistance and international cooperation.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to capacity building and the role of international assistance versus self-reliance for developing countries in cybersecurity.

Difference level

The level of disagreement among the speakers is relatively low, with most differences being in emphasis rather than fundamental approach. This suggests a general consensus on the importance of capacity building and international cooperation in cybersecurity, particularly for developing economies.

Partial Agreements

Both speakers agree on the importance of preparation and capacity building, but Mohan emphasizes policy development while Grabensee focuses on enforcing existing laws rather than creating new ones.

Ram Mohan

Philipp Grabensee

Preparation, people, and policy as crucial factors

Importance of enforcing existing laws and building capacity

Takeaways

Key Takeaways

Developing economies face significant cybersecurity challenges, including skills gaps, lack of resources, and inadequate preparation

Critical skills for cybersecurity in developing countries include strategic thinking, effective communication, and promoting collaboration

Preparation, people, and policy are crucial factors in cybersecurity readiness

International cooperation and trust between countries is essential for effective cybersecurity

Capacity building and technical assistance are vital for improving cybersecurity in developing economies

Developing countries should define their own cybersecurity needs rather than relying solely on models from developed nations

Implementing existing cybersecurity frameworks is often more effective than creating new laws or regulations

Resolutions and Action Items

Collate information from discussions like this and check its applicability with developing countries

Utilize existing resources like the Global Forum on Cyber Expertise (GFCE) framework and materials

Focus on practical, small steps in building cyber defense rather than overwhelming large-scale changes

Encourage developing nations to set up national CSIRTs (Computer Security Incident Response Teams)

Unresolved Issues

How to effectively address the cybersecurity skills gap in developing countries

Balancing the need for international cooperation with maintaining independence in cybersecurity strategies

How to ensure sustainable and continuous improvement in cybersecurity capabilities despite limited resources

Addressing language barriers in accessing cybersecurity information and participating in international forums

Suggested Compromises

Using hybrid or online formats for international meetings to increase participation from developing countries

Creating regional hubs for cybersecurity cooperation to make participation more accessible for smaller countries

Focusing on enforcing existing laws and building capacity rather than constantly creating new cybersecurity legislation

Balancing the adoption of international best practices with developing country-specific strategies that fit local contexts

Thought Provoking Comments

The very first thing that fails are the systems and the people who are unprepared. And it doesn’t matter if you have great resources, great knowledge, great education, but you will find, and this is true even in developed nations, but it’s especially true in developing countries, there is no preparation for it.

speaker

Ram Mohan

reason

This comment highlights the critical importance of preparation and readiness, beyond just having resources or knowledge. It challenges the assumption that simply having advanced technology or information is sufficient for cybersecurity.

impact

This shifted the discussion towards the practical aspects of cybersecurity implementation, especially in developing countries. It led to further exploration of the gaps between theoretical knowledge and practical readiness.

The best help is if you just provide resources which enable those countries to find their own way, because otherwise they are just a target for the export of models.

speaker

Wolfgang Kleinwächter

reason

This insight emphasizes the importance of empowering developing countries to create their own cybersecurity strategies rather than simply adopting models from other nations. It introduces a nuanced perspective on international cooperation and capacity building.

impact

This comment sparked a discussion about the balance between international assistance and local autonomy in cybersecurity. It led to considerations of how to provide support without imposing external models.

It’s not so much the frameworks, and it’s not so much the laws. In the end, it really comes down to the people.

speaker

Philipp Grabensee

reason

This comment cuts through the focus on legal frameworks and technology to emphasize the human element in cybersecurity. It challenges the notion that solutions are primarily about laws or technical systems.

impact

This insight refocused the discussion on the importance of human capacity and training in cybersecurity efforts. It led to further exploration of how to address skills gaps and prepare people effectively.

Overall Assessment

These key comments shaped the discussion by shifting focus from theoretical frameworks and technological solutions to practical implementation challenges, especially in developing countries. They highlighted the importance of preparation, local autonomy in strategy development, and human capacity building. The conversation evolved from discussing broad international policies to exploring specific ways to empower and prepare individuals and institutions for cybersecurity challenges.

Follow-up Questions

How can developing economies find the time and resources to prepare information for sharing with colleagues?

speaker

Olga Cavalli

explanation

This is important because information sharing is crucial for cybersecurity, but developing economies often lack the time and resources to prepare and share information effectively.

How can developing countries build their own strategies for capacity building in AI and cybersecurity, rather than relying on models exported from other countries?

speaker

Wolfgang Kleinwächter

explanation

This is important to ensure that developing countries can address their specific needs and avoid becoming dependent on models from other countries that may not be suitable for their context.

How can we make international cybersecurity forums more accessible and relevant for developing world countries?

speaker

Christopher Painter

explanation

This is crucial because developing countries often struggle to participate in multiple international forums due to resource constraints, yet their participation is essential for global cybersecurity efforts.

How can we address the skills gap in law enforcement for dealing with cybercrime, particularly in developing countries?

speaker

Philipp Grabensee

explanation

This is important because effective law enforcement is crucial for combating cybercrime, but many countries lack the necessary trained personnel and resources.

How can we collate existing cybersecurity frameworks and resources to make them more accessible and applicable for developing economies?

speaker

Ram Mohan

explanation

This is important to avoid reinventing the wheel and to help developing economies implement effective cybersecurity measures based on existing, proven frameworks.

How can developing nations set up effective national CSIRTs (Computer Security Incident Response Teams)?

speaker

Merike Kaeo

explanation

This is critical for developing nations to build their cybersecurity capacity and respond effectively to cyber incidents.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Main Session | Policy Network on Artificial Intelligence

Main Session | Policy Network on Artificial Intelligence

Session at a Glance

Summary

This discussion focused on the outcomes of the IGF Policy Network on Artificial Intelligence’s year-long work, covering key areas of AI governance. The panel explored four main topics: liability in AI governance, environmental sustainability in the generative AI value chain, interoperability, and labor implications of AI. Speakers emphasized the need for a global governance framework for AI, highlighting challenges such as the digital divide, environmental impacts, and the potential for misuse in areas like peace and security.

Key points included the importance of accountability and transparency in AI systems, with suggestions for expanding liability to both producers and operators. The environmental impact of AI was discussed, particularly regarding resource extraction and energy consumption. Speakers stressed the need for interoperability while cautioning against potential exploitation of creators’ labor. The discussion touched on AI’s impact on the job market and the need for upskilling and reskilling workforces.

Panelists debated the feasibility of a global AI governance regime, acknowledging the challenges of multilateralism but emphasizing its necessity. The importance of common standards and definitions was highlighted, along with the need for both global cooperation and domestic policy implementation. Speakers also addressed the need for a clear definition of AI and the challenges of regulating a rapidly evolving technology.

The discussion concluded with calls for responsible collaboration, prioritizing sustainability in AI development, and the importance of considering AI’s impact on humanity as a whole. Participants emphasized the need for a comprehensive approach to AI governance that addresses technical, ethical, and societal implications.

Keypoints

Major discussion points:

– The need for global AI governance and cooperation, while balancing domestic policies

– Liability and accountability issues related to AI systems and their impacts

– Environmental and sustainability concerns around AI development and deployment

– Addressing inequality and capacity building to ensure equitable AI access and benefits

– Defining AI and its applications to enable effective governance

The overall purpose of the discussion was to examine key issues and recommendations around AI governance, based on a report by the IGF Policy Network on AI. The speakers aimed to explore how to implement responsible AI development and deployment on a global scale.

The tone of the discussion was largely serious and concerned, reflecting the gravity of the challenges posed by AI. However, there were also notes of cautious optimism about AI’s potential benefits if governed properly. The tone became more urgent towards the end as speakers emphasized the need for swift action on global AI governance.

Speakers

– Sorina Teleanu: Moderator

– Amrita Choudhury: Policy Network on AI coordinator

– Jimena Viveros: Managing Director and CEO of Equilibrium AI, member of the UN Secretary General’s high-level advisory body on AI

– Anita Gurumurthy: Executive Director of IT4Change

– Yves Iradukunda: Permanent Secretary, Ministry of ICT and Innovation of Rwanda

– Brando Benifei: Member of the European Parliament and co-rapporteur for the EU AI Act

– Meena Lysko: Founder and Director of Move Beyond Consulting and Co-Director of Merit Empower Her

– Muta Asguni: Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia

Additional speakers:

– Ansgar Kuhne: Affiliated with EUI

– Riyad Najm: From the media and communications sector

– Mohd: Online moderator

Full session report

Expanded Summary of IGF Policy Network on Artificial Intelligence Discussion

Introduction

This summary reports on a discussion focused on the outcomes of the IGF Policy Network on Artificial Intelligence’s year-long work, covering key areas of AI governance. The panel explored four main topics: liability in AI governance, environmental sustainability in the generative AI value chain, interoperability, and labour implications of AI. The discussion brought together experts from various sectors, including government officials, policymakers, and industry representatives, to address the complex challenges posed by AI development and deployment on a global scale.

Key Discussion Points

1. Liability and Accountability in AI Systems

The discussion emphasized the importance of establishing clear liability and accountability mechanisms for AI systems. Anita Gurumurthy, Executive Director of IT4Change, called for liability rules that encompass both producers and operators of AI systems. Jimena Viveros, Managing Director and CEO of Equilibrium AI and member of the UN Secretary General’s High-Level Advisory Body on AI, stressed the importance of state responsibility throughout the AI lifecycle, while also noting the challenge of allocating responsibility given the opacity of AI systems.

Brando Benifei, Member of the European Parliament and co-rapporteur for the EU AI Act, highlighted the need for transparency to address liability issues in the AI value chain. The speakers agreed on the importance of accountability and transparency in AI systems, emphasizing the need for explainability to ensure proper governance.

2. Environmental Sustainability in the AI Value Chain

The environmental impact of AI was a significant point of discussion. Meena Lysko, Founder and Director of Move Beyond Consulting, highlighted the environmental impacts of AI infrastructure and resource extraction. Muta Asguni, Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia, provided concrete data on the projected increase in electricity consumption by data centres, emphasizing the urgency of addressing AI’s environmental impact.

The discussion also touched on the potential of AI to support sustainable development goals, highlighting the complex nature of AI’s effects on society and the environment.

3. Interoperability and Global Cooperation

The importance of interoperability and global cooperation in AI development and governance was a recurring theme. Yves Iradukunda, Permanent Secretary at the Ministry of ICT and Innovation of Rwanda, stressed the importance of partnerships to bridge divides in AI development, particularly between developed and developing nations. Benifei emphasized the need for common standards and definitions for AI globally.

Lysko called for sincere collaboration on responsible AI development, while Asguni highlighted the challenge of regulatory arbitrage between countries. This underscored the need for a coordinated global approach to AI governance while recognizing the difficulties in achieving this given varying national interests and capabilities.

4. Labour Implications and Social Impact of AI

The social implications of AI, particularly its impact on labour markets, were discussed. Benifei stressed the need to consider AI’s impact on labour markets and the importance of upskilling and reskilling workforces. Iradukunda emphasized the importance of addressing inequalities in AI adoption and access, particularly in the Global South.

The discussion highlighted the need for a whole-of-government and whole-of-society approach to AI governance, as well as the importance of capacity building and awareness to ensure equitable AI development and deployment.

Additional Key Points

1. Global AI Governance and Regulation: The need for a global governance framework for AI emerged as a central theme, with speakers emphasizing the importance of international cooperation while acknowledging the challenges of multilateralism.

2. Defining AI: The challenge of defining AI for governance purposes was raised by audience member Riyad Najm, highlighting a fundamental issue in AI regulation efforts.

3. AI’s Impact on Peace and Security: Multiple speakers raised concerns about the potential misuse of AI in military applications and its implications for global peace and security.

4. AI Chatbot for Report Interaction: The creation of an AI chatbot to allow interaction with the report’s contents was noted as a practical tool for disseminating the findings.

Unresolved Issues and Future Directions

Several key issues remained unresolved, including:

1. How to achieve a binding global treaty or governance framework for AI

2. Balancing proactive and reactive approaches to AI regulation

3. Addressing regulatory arbitrage between countries, especially between the Global North and South

4. Defining AI in a way that allows for effective governance

5. Ensuring transparency and explainability of AI systems for accountability purposes

6. Protecting against misuse of AI, especially in military applications

Conclusion

The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreement on the need for global cooperation and comprehensive governance frameworks, differences in emphasis and approach underscored the difficulties in achieving a unified global strategy. The conversation emphasized the importance of considering AI’s impact on humanity as a whole, balancing its potential benefits with the need to mitigate risks and ensure equitable access and development.

As summed up by a quote from the UN Secretary General, shared by moderator Sorina Teleanu, “Digital technology must serve humanity, not the other way around.” This encapsulates the overarching goal of AI governance efforts discussed in this panel: to harness the potential of AI for sustainable development and societal benefit while addressing the significant challenges it poses to global governance, environmental sustainability, and social equity.

Session Transcript

Sorina Teleanu: Welcome to the main session of the IGF policy network on artificial intelligence. We have about one hour and 15 minutes to go through the main outcomes of a work that has been happening for about a year. And before I go and introduce our guests, I would like to invite you to join me in welcoming Amrita and tell us a bit about the work that has been going on for this year behind the policy network on artificial intelligence. Amrita?

Amrita Choudhury: Good afternoon, everyone, and thank you for coming. I agree with Sorina, the room is too large and the audience is too far away from all of us, but thank you for coming to this policy network on AI’s main session. Just to give you some background, the policy network on AI originated from the 2022 IGF, which was held in Addis Ababa, where the community thought that there should be a dedicated policy network working on AI issues, especially related to governance, with a focus on the global south. And so in the first year, that is last year, we produced a report, and you can go and see it on the PNAI website. This is a multi-stakeholder group which actually decides what is going to be discussed, how it is going to happen, and how the report is formed. We have a few of the community members also sitting here, and this year we had four subgroups: interoperability, sustainability, liability, and labour-related issues. All community members have worked in their capacities, and they are all volunteers, but some names I would like to mention are Caroline, Shamira, Ashraf, who is also the online moderator, Yik Chan Chin, Olga Cavalli, Shafiq, and Herman; they were great leaders of the various subgroups. We also thank all the members, volunteers, proofreaders, our consultant, Mikey, who is working behind the screen, and I think she’s sitting there, and our MAG coordinator, who is also sitting there, for all the hard work that has been put in. If you want to see the report, it is online, and if you are in the Zoom room, it will be put into the chat. And I think Sorina also has something planned. With that, I will pass it on to Sorina and our panellists.

Sorina Teleanu: Thank you so much, Amrita. I’m going to try to get closer to you as well, because that feels a bit odd, and the light is exactly on me. So we heard a bit about the work of the Policy Network on Artificial Intelligence, and I’m sure you have heard lots of talks about AI these days. We are, in fact, doing reporting, and you can probably also guess it: the main word over the past two days at the IGF has been, obviously, AI. So it’s talked about quite a lot, and it is in this context that we will be trying to unpack some of the discussions around artificial intelligence, more specifically around AI governance, with our esteemed guests, whom I’m going to introduce briefly. But before that, let me tell you a few words about the report. Amrita mentioned it’s available online, and it is the result of a one-year-long process, so I do kindly encourage you to take a look at it, at least at the executive summary. The report covers four main areas. One is liability as a policy lever in AI governance. The second is environmental sustainability within the generative AI value chain. The third area is interoperability: legal, technical, and data-related. And the final area covered in the report is the labor implications of AI. So now I’m going to ask the obvious question. Has anyone here even tried to open the report before joining this session? Am I seeing a hand? Oh, I’m seeing a few hands. Excellent. Thank you for doing that. I also have some, I hope, good news for you here, and also for our colleagues who have been working so long on this report. Our attention spans are kind of limited these days, and reading a 100-something-page report might not be the first thing we want to do. But I have a gift for you, and that’s an AI assistant. We talk about AI; let’s also walk the talk a bit. What my colleagues at Diplo Foundation have been doing is to build an AI assistant based solely on the report. So you can go online and actually interact with the AI assistant and ask questions about the report and its recommendations. We’re going to share the link, and you can access it during and after the session as well. And I’m pretty sure colleagues who have been working on the report would be looking forward to hearing your feedback on what is written there. Let me turn back to our guests. I’ll introduce them briefly, and then the plan for the session is to hear a bit from them about the four main areas of the report, also hear how they see the recommendations of the report and where they think these recommendations could be going moving forward so they actually have an impact in the real world. And then we do hope to have a dialogue. Although this room might not be very inviting for the kind of dialogue we’re hoping for, I will be looking at you, and I hope there will be a few raised hands in the room. So let me do what I have been promising for quite a while. In no particular order, we have Jimena Viveros. Thank you, Jimena. Managing Director and CEO of Equilibrium AI, and also a member of the UN Secretary General’s high-level advisory body on AI, which produced another excellent report that I do encourage you to take a look at if you haven’t yet. Then we have online with us Meena Lysko. Thank you, Meena, for joining us. Founder and Director of Move Beyond Consulting and Co-Director of Merit Empower Her. Also online, Anita Gurumurthy, Executive Director of IT4Change. Thank you, Anita, for joining.
Back to the room, we have Yves Iradukunda, Permanent Secretary, Ministry of ICT and Innovation of Rwanda. Thank you for joining. Brando Benifei, Member of the European Parliament and co-rapporteur for probably the most famous piece of legislation on AI at the moment, the EU AI Act. And Muta Asguni, Assistant Deputy Minister for Digital Enablement, Ministry of Communication and Information Technology of Saudi Arabia. Thank you so much for hosting us this year. We also have an online moderator for our participants online; Mohd will be giving us feedback and input from the online room. So again, in no particular order, I’m going to invite our guests to reflect on a section of the report, try to look also at the recommendations if possible, and tell us how you see these recommendations moving forward. And I’m going to do an absolutely random pick. Anita, would you like to start?
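A report-grounded assistant of the kind Sorina describes can be sketched, in broad strokes, as a retrieval-augmented pipeline: chunk the report, embed the chunks, retrieve the passages closest to a question, and constrain the answer to those passages. The snippet below is a minimal illustration under stated assumptions, not a description of Diplo’s actual system: the file name report.txt, the chunk size, and the all-MiniLM-L6-v2 embedding model are all assumptions made for the example.

```python
# Minimal sketch of a report-grounded Q&A assistant (retrieval-augmented).
# Assumptions: the report text sits in report.txt and the sentence-transformers
# package is installed; file name, chunk size, and model choice are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

def chunk(text: str, size: int = 800) -> list[str]:
    """Split the report into overlapping chunks so answers can cite passages."""
    step = size // 2
    return [text[i:i + size] for i in range(0, len(text), step)]

with open("report.txt", encoding="utf-8") as f:
    chunks = chunk(f.read())

# Embed every chunk once; normalized vectors make dot product = cosine similarity.
chunk_vecs = model.encode(chunks, convert_to_numpy=True, normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k report chunks most similar to the question."""
    q = model.encode([question], convert_to_numpy=True,
                     normalize_embeddings=True)[0]
    best = np.argsort(chunk_vecs @ q)[::-1][:k]
    return [chunks[i] for i in best]

question = "What does the report recommend on liability?"
context = "\n---\n".join(retrieve(question))
prompt = (
    "Answer using ONLY the report excerpts below; "
    "say so if they do not cover the question.\n\n"
    f"Excerpts:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # hand this prompt to whichever language model does the generation
```

The design point worth noting is the final prompt: by restricting generation to the retrieved excerpts, such an assistant answers from the report itself rather than from a model’s general training data.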

Anita Gurumurthy: Sure, I can do that. Am I audible? Okay. Thank you. I just wanted to commend the report, and especially the four focus areas. Those really come with a lot of insight and also reflect state-of-the-art analysis, especially on crucial but often neglected areas of environment and labor. It also takes up two very, very difficult areas: the whole idea of liability, and the idea of interoperability. I’ll focus on these two, because I’d like to really zoom in on what I think we should be looking at in this domain. What would be interesting and useful is for the report to enlarge its remit in terms of liability rules, which should apply to both producers and operators of systems, because a fairly invested level of care is needed in designing, testing, and deploying AI-based solutions. And we need to understand that while producers control the product’s safety features and look at how interfaces between the product and its operator can be improved, let’s take the whole context of social welfare systems, or a government deploying systems: in that case, the operator of the system is also implicated in decision-making around the circumstances in which systems will be put to use. These are real-world situations, and here I think it’s really important that operators also become liable and bear some of the associated costs when risks become actual harms. So that’s one thing. The second is a particular thought that I have around the training that we do of the judiciary, the training that is needed for lawmakers, policymakers, et cetera. Here I think that the elephant in the room cannot be disregarded, and that is really the absence of a global space to make certain decisions. We are particularly concerned not only about the opacity of algorithms, but about the fact that such opacity, often in cross-border value chains, in trade in services, for instance, gets compounded because of trade secret protections. Trade secret claims over the technical details of AI can really become an obfuscating mechanism and limit the disclosure of such information. I want to draw your attention to a recent paper from CIGI in Canada, where a landmark case about Lyft and Uber came up to the court, and the Washington Supreme Court ruled that the reports in question, maintained as trade secrets by Lyft and Uber, actually qualify as public records, and in the public interest they have to be released. So we have to look at this very carefully. The other thing I want to say is that the recommendations in the environment section could also draw on very useful concepts from international environmental law, such as the Biodiversity Convention and common but differentiated responsibilities, because the financing that is needed for AI infrastructures will require us to adopt a gradient approach. Some countries are already powerful and some are not. So that’s very important. I’d also like to focus a little bit, maybe one minute or a minute and a half, on the vital distinction between interoperability as a technical idea and interoperability as a legal idea. If you look at interoperability, sometimes while calling for this important principle, it’s like openness: we have to be careful about who we are making something open for, and whether there is a public interest underlying such openness. So interoperability often enables systemic exploitation of creators’ labor, right?
So oftentimes, if we don’t have guardrails, the largest firms tend to cannibalize innovation. I would like to conclude by saying that we should really look at technical interoperability and policy sovereignty not as things that are polarized; rather, we should work towards a framework in which many countries can participate in global AI standards. My last comment would be a fleeting remark about the wonderful chapter on labor, which could perhaps do with one addition on the idea of cross-border supply chains in which labor in the global south is implicated, and the fact that, while guaranteeing labor rights, we really need to understand that working conditions in the global south include subcontracting. Transnational corporations must therefore also be responsible in some way when they outsource AI labor chains to third parties or subcontractors, so that we are actually looking at responsibility in the truest sense of the term. I’ll stop here, thank you.

Sorina Teleanu: Thank you, Anita. We’re already adding more keywords to the ones we have in the four main sections of the report; I have two on my list right now, transparency and responsibility. I’ll be adding more during the discussion, and at the end we’ll see what the keywords in this debate were. So we’re moving from the global south to the global north, and I’m going to invite Brando to provide his reflections, also because they relate to what Anita has been talking about: interoperability and cross-border supply chains. So, Brando, you have the floor.

Brando Benifei: Thank you. Thank you very much. First of all, I’m really happy to be able to speak in this very important panel, because clearly on AI we need to build global cooperation, global governance, and we need to examine together what the challenges are. And in fact, the impact on labor and the opportunities of having an interoperable technological development around AI are among those challenges. The fact that we chose, and it was a debated choice, it was not obvious, to identify the use of artificial intelligence in the workplace as one of the sensitive use cases that the AI Act regulates, to try to build safety and safeguards for workers, for those impacted by AI in the place where they work, is one important direction. Also, from a larger policy point of view, the impact of AI on the labor market is already considerable, so we need to build common strategies to manage the change in how the workforce will be composed. Consider that we are only two years into the generative AI revolution, to some extent we can call it that, only two years since it reached the general public, and we will see what happens in a short time after. So the impact is already strong. We need to consider the change that is happening as comparable to when electricity was introduced. Sometimes I hear it compared with the internet. No, because the internet is not as pervasive as AI can be. AI can change every workplace, every dynamic of labor. It’s like the invention of electricity; it’s like the use of steam in the development of pre-industrial automatic processes. We can look at it with that eye, I would say. And that’s why we need global governance. We need rules, because the impact on our societies is in fact even larger, not just on labor. But obviously, and I say that as one who negotiated a regulation that dealt with market rules, we need to build a set of policies, fiscal policies, budgetary policies, permanent lifelong learning policies, that are able to deal with these changes. And I really believe we need to build common standards, common definitions. We are working on that in various international fora so that we can have more interoperability. In fact, you know that the EU has been leading on pushing against those that limit interoperability; another legislative act of the EU, the Digital Markets Act, is also targeted at increasing interoperability. We think this is crucial if we want different parts of the world to work together and to find solutions between our different businesses, so that our AIs can cooperate and work together, not sit in separate silos. I don’t think that would be good for our economies, or for global understanding. We need AI to be respectful of different traditions, different histories. I say that because we risk instead, because of the dynamics of how the training of AI happens, a very limited cultural scope. And I say that from Europe, so it could apply even more to other parts of the world. So these are some of the challenges we face. I strongly believe that we need to combine, and I conclude on this point, two different efforts.
On the one hand, domestic policy, in the sense that we need to have our own rules on how we deal with AI entering our societies. There can be different models, there will be different models, but we can build some common understanding. For example, and this applies again also to the labor topic, we have built some common language, also looking at the work of the UN, on the issue of risk categorization: the idea of attaching different levels of risk to different ways of using AI as a common way of looking at how we use AI. On the other hand, I think we also need to concentrate on where we need to work at a supranational level, because there are issues where we cannot find solutions without working across borders. I’ll mention one thing that is outside the two topics of labor and liability, but I think it’s especially important to mention it, to conclude: the issue of the security and military use of AI. I think it’s very important that we work on that, because all the other actions are not effective if we are not able to control AI used as a form of weapon or a form of security in all its implications. So these are some of my reflections on the topic. Thank you very much.
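The risk categorization Benifei refers to, attaching obligations to tiers of risk, can be pictured as a simple lookup from use case to obligation tier. The toy sketch below is loosely inspired by the AI Act’s four-tier approach (unacceptable, high, limited, minimal risk); the specific use-case strings and obligation descriptions are illustrative assumptions for exposition, not the Act’s actual annexes or legal text.

```python
# Toy illustration of risk-based AI categorization, loosely inspired by the
# EU AI Act's tiered approach. Use cases and obligations here are illustrative
# assumptions for exposition, not the Act's actual annexes or legal text.
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency duties (e.g. disclose that users face an AI)"
    MINIMAL = "no specific obligations"

# Hypothetical mapping from use case to tier.
USE_CASES = {
    "social scoring of citizens": Risk.UNACCEPTABLE,
    "cv screening in hiring": Risk.HIGH,   # workplace use, as discussed above
    "customer-service chatbot": Risk.LIMITED,
    "spam filtering": Risk.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up the tier for a use case; unknown uses default to minimal risk."""
    tier = USE_CASES.get(use_case, Risk.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in USE_CASES:
    print(obligations(case))
```

In the Act itself the tiers are fixed in law, and workplace uses such as hiring sit in the high-risk tier, which is why Benifei describes the workplace as a regulated sensitive use case.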

Sorina Teleanu: Thank you also for covering quite a few topics. The good news on your final point about the discussions on the security and military implications of AI is that there is a debate at the UN General Assembly on a potential resolution on that. So for anyone in the room who belongs to a government, do encourage your Ministry of Foreign Affairs to be part of this discussion, because, as Brando was saying, it is important to have some sort of universal agreement at the UN level. On the interplay between global governance and rules and domestic policies, I hope we can get back to that a little later in the session, and I’m hoping to also hear reflections from the room, because that’s a very important point: if we agree on something at an international level, what next, and how can we implement those policies locally, at the national and regional level as well? I also liked your point about common standards and definitions. It’s not easy to agree on these things at the regional level, and at the international level it’s even more complex, but it would help when we discuss, again, interoperability, liability, and all these things that have been raised so far. Let me move on to Jimena, because she’ll also speak about liability.

Jimena Viveros: Hello, thank you very much. It’s a pleasure to be here with all of these distinguished speakers and the audience. First of all, I would like to highlight what Brando was saying before about peace and security, because I think that is key. As a commissioner for REAIM, the global commission for the responsible use of AI in the military domain, we like to expand this into the broader peace and security domains, because we know the implications of AI in the civilian space, in all types of different forms, but in the peace and security domains they are not limited to the military. We can see them in civilian actors that are state actors, such as law enforcement and border control, and we can also see them in non-state actors that are also civilian, ranging from terrorism, organized crime, and mercenary groups to just rogue actors. So it’s very important to look at it from all of these dimensions, because they do have a very destabilizing effect internationally, regionally, and at every level, given the scalability, the easy access, the proliferation of it all. That’s why accountability and liability are so important. The report is great, and it really tackles a lot of the right topics on liability. However, it only focuses on liability in terms of administrative, civil, and/or product liability. It was a deliberate choice to exclude criminal responsibility, but I would go a little further and say that we need to look at state responsibility as well, for the production, the deployment, and the use across the entire life cycle, basically, of any of these AI systems. I think it’s very fitting that this liability part is the first section of the report, because it’s extremely important. Why? Because in the current landscape that we’re living in, where international law is pretty much blatantly violated with complete impunity all the time, talking about accountability may seem like a fairy tale, but it’s really important to uphold the rule of law and to rebuild trust in the international system, which is at a critical moment right now. Also for the protection of all types of human rights for all types of people, especially those in the Global South. I am Mexican, and in the Global South we are disproportionately affected by these technologies, both by the digital divide and by their deployment, and the fact that we are basically consumers, not developers, also greatly influences how we are affected by the technology. It also matters because there is a deterrent effect when we’re talking about accountability, especially in the criminal domain, and this deterrent effect helps promote the safe, ethical, and responsible use, development, and deployment of AI. And it also allows for remedies for harm.
These mechanisms are very important and should be included in every type of accountability framework, because we do have a lot of problems that stem from AI in terms of liability and accountability, and I prefer the term accountability because it’s more encompassing. We have the atomization of responsibility: there are so many actors involved throughout the entire life chain of these technologies, enterprises, people, and also states as a whole; that’s why I included state responsibility. I identify three categories, the users, the creators, and the authorizers, and they’re not mutually exclusive; each type of responsibility can and should be allocated on its own. Obviously, what was mentioned, the opacity, the black box, also affects the proper allocation of responsibilities to each one of the actors, as does the fragmentation of governance regimes, because what we’re witnessing now is a kind of forum shopping: whichever jurisdiction is more amenable to your purposes, that’s where you set up, or that’s where you operate, and so on. That’s why a global governance regime is extremely important, because these technologies are transboundary, as has already been said. Having a patchwork of initiatives is completely insufficient, and the regimes that we have right now for reporting, monitoring, and verifying everything that could eventually lead to some type of accountability are all based on voluntarism, which in my opinion is absolutely insufficient and ineffective. At the OECD, we have an incident monitoring framework that is obviously based on self-reporting, and we have witnessed there that the lack of transparency and accuracy in this type of voluntary system is just not going to work; it’s absolutely unsustainable. Also, the type of self-regulation being used, or self-imposed by the industry sector, is not going to work if we don’t have actual enforcement mechanisms in a centralized authority, because if we go, again, state by state, it’s really not going to be very efficient. I think we all have a general notion of what accountability is, what it means, and why it matters. We just need to find the solutions, and the willingness to do so, because everyone should be accountable throughout the entire life cycle of AI. I’ll leave it here, but I’m happy to expand on some issues later. Thank you.

Sorina Teleanu: Thank you, Jimena. I think we’re already collecting suggestions for the policy network to continue working on these issues next year, and I’m taking notes of some of the areas that could be in focus. You mentioned the impact of AI on peace and security more broadly, going beyond the military domain, the notion of state responsibility and liability, and then the fragmentation of AI governance. And I’m going to put a question out there that I hope we can explore a little later with everyone in the room as well: the idea of a global governance regime. How feasible is it actually? And what can be done concretely to get there? We all know that the appetite for multilateralism these days is not as strong as we might want it to be, but maybe not all is lost. All right, let me continue with our speakers, and I’m going to invite Yves to continue, please.

Yves Iradukunda: Thank you, and good afternoon. It’s great to be here in this critical conversation, and thanks to the Internet Governance Forum for inviting us, and particularly for the commendable work that the Policy Network on AI has done. The report offers really good recommendations that, if implemented and if they guide our engagements going forward, should make a significant impact. This conversation is very critical, and hearing my fellow panelists share their reflections on the report, and their insights on their respective contexts and work, challenged me to think that we can’t talk about responsible AI as if AI were isolated from everything else that we do in our lives. When we think about AI as a technology, we also need to reflect on why AI to begin with, why technology to begin with, and what the impact of technology has been all along, before AI even came in. If we reflect on that, then AI is not a new concept from the perspective of its impact on our day-to-day lives. I say this because technology has been able to help advance innovation, solve different challenges, and help tackle some of the issues that we have, but at the same time, technology has driven some of the inequity issues. As we reflect particularly on AI today, we need to acknowledge that if the foundational values of why we do technology are not revisited, it’s not just about AI, it’s about the values of our society altogether. But since we are focusing on AI, allow me to reflect from the perspective of Rwanda. We always ask ourselves: to what extent, to what end, what is the end goal? And we focus primarily on the impact we want to have on our citizens. Whether it’s AI or any other emerging technology, we really want to see it as a tool, a tool that we use to improve the lives of our citizens, whether in healthcare, education, or agriculture, where we are prioritizing our investments to leverage AI in addressing the gaps that we have. What we’re seeing as an outcome is that technology investments, and the early successes of AI, are helping bridge the gaps we see in equity and inclusion, but most importantly improving the lives of our citizens. So the themes for today’s discussion, whether it’s interoperability in governance, environmental sustainability, accountability, or the impact AI is having on labor, can all be addressed if we again zero in on the impact we want to have on our society. On unlocking AI’s full potential, I would agree with what has been said before: the values, ethical guidelines, and principles that guide how it’s implemented have to be followed, and there has to be consensus and dialogue on how we deploy the different solutions ethically. As was said earlier, responsible approaches have to really understand the different players that are accessing AI tools, and the standards should follow those values and protect against ill-intentioned uses of AI. So when I look at the report and the different recommendations, I find confidence in this global community within the policy network. But for this session, I really want to call upon the leaders in the room, the technology specialists, and the corporate companies that are deploying these tools to follow these recommendations, but most importantly, to figure out what it is that we want to do for our society.
When it comes to building capacity, I think it’s something that we need to double down on. Right now, there is inequity in how different countries are adopting AI. The talent is probably available in all countries, but in terms of access to the tools and in terms of awareness, there is a big disparity. So even as we speak, most people across the globe may have a limited understanding and appreciation of the impact AI is going to have on their lives. Building capacity should therefore start with awareness, and the deployment of AI tools should really be focused on improving people’s lives at all levels, whether it’s the highest advancements in security, as was just said, or in medicine and other applications. We also need to think about how it affects farmers in their respective societies at different levels. I think we should foster partnership as we follow up on the implementation of this report. Like I said, it’s not just for governments or corporates alone or international organizations; we need to really bring partnerships forward to make sure that we bridge the divide and accelerate innovation across all levels. And finally, I think we should commit to the adoption of these policies within our respective jurisdictions. The impact of AI knows no boundaries; even its environmental impact does not stop at borders. So we should also look at innovation around the manufacturing of equipment used for AI solutions, at renewable energy solutions, and at limiting applications of AI that work against environmental goals. To conclude, the work on the report and the recommendations is really commendable. A lot of insights have already been shared here on the panel, but this is really a call on all the leaders present to put at the center the impact on the people, on their citizens, and to really think about how AI serves that purpose of improving their lives.

Sorina Teleanu: Thank you so much, Yves, also for bringing the focus back to issues of inequality, access, and capacity building, and how we bridge the divides that we actually see growing instead of shrinking. And I very much like your question: to what end? Where are we going with this, whatever we call it, technological progress? While listening to you, I was reminded of two quotes we came across the other day while going through the discussions of the many sessions, and I just want to read them quickly, hoping we can reflect on them a little later, building on your point about, okay, to what end? One came from the Secretary-General of the UN, who is actually convening this forum, and it is very simple but also very powerful: Digital technology must serve humanity, not the other way around. We might want to think about this a bit more as we develop and deploy AI technologies. The second one is a bit more elaborate, but along the same lines: Are we sure that the AI revolution will be progress? Not just innovation, not just power, but progress for humankind. I’m hoping we can have a bit more reflection on this here, but also beyond, in our broader debates on AI governance. I’m going to move online and invite Meena to provide her intervention. Meena, over to you.

Meena Lysko: Thank you. Thank you very much, Sorina. Maybe I could start by thanking the Internet Governance Forum’s Policy Network on Artificial Intelligence for organizing this very important discussion and for inviting me to be part of it. I appreciate the chapter on environmental sustainability and generative AI. I’d like to first paint a vivid picture; this picture, as well as other scenarios, I firmly believe, has been the premise for the Policy Network on AI. As it stands, the Global South is indiscriminately impacted by generative AI and its associated technologies. The Global North economies are strengthened largely by providing technologically advanced solutions which are taken up worldwide, and at the same time, the Global North has the resources and time to implement and enforce policies which protect local environments; the entire day is not necessarily spent on hard and hazardous labor just to get food into the mouths of the poor. This may not be the same in poorer and developing countries. Just as with plastic pollution, we will see greater disparities in the impact of non-green industries on the environments of the most vulnerable. To illustrate this view, I will use the example of generative AI. The automotive industry is being transformed by the integration of electric vehicles, software-defined vehicles, smart factories, and generative AI. Identifying red flags related to environmental harm across the entire value chain of electric vehicles is crucial to sustainable development. So, permit me: a key red flag is the biodiversity loss from mining raw materials. Generative AI relies on large-scale data centers, GPUs, and other computational hardware, as do all of us with, for example, our smartphones, all of which require metals and minerals like lithium, cobalt, nickel, rare earth metals, and copper. Extracting these materials impacts local ecosystems, wildlife, and the broader environment. Let’s look at this from the perspective of deforestation and habitat destruction. Consider cobalt, mined in the forests of the Democratic Republic of the Congo, a Global South country. Cobalt is a chemical element used to produce lithium-ion batteries, for example. The country has seen genocide and exploitative work practices, and the cutting down of millions of trees, in turn negatively impacting air quality around mines. More so, cobalt is toxic. The expanded mining operations result in people being forced from their homes and farmland. According to a 2023 report, forests, highlands, and lakeshores of the eastern DRC are guarded by armed militias that enslave hundreds of thousands of men, women, and children. The destruction of forests due to cobalt mining also reduces the earth’s natural carbon sinks, which are crucial for mitigating climate change. Let’s also be reminded of the negative impacts of copper and nickel mined in the Amazon rainforests, the escalating nickel extraction in Indonesia, and the lithium mined in Chile’s Atacama Desert. So, besides the biodiversity loss from mining raw materials, we can also explore water pollution from material extraction, processing, and battery disposal; the carbon footprint from energy-intensive production, assembly, and charging; and waste generation at every stage, including battery disposal and component manufacturing.
There are also the social and ethical issues like child labor in mining, hazardous working conditions, and greenwashing. Addressing these red flags requires stricter regulation, sustainable sourcing, clean energy use, and investments in circular economy practices. We need to be extra mindful of the impact of batteries on the environment in the longer term. We are presently having to manage the disposal of electronic waste, including the plastic; these impregnate our vital land and waters, though still at a micro and nano level. If we fast forward a few decades from now, the battery waste bodes to be far more unmanageable, as we will then be looking at seepage of fluids into our ecosystems. So, the Policy Network on Artificial Intelligence policy brief report provides seven multi-stakeholder recommendations for policy action. I’d like to emphasize that in developing a comprehensive sustainability metric for generative AI, that’s recommendation one, the standardized metrics must have leeway to adapt, to take into consideration our rapidly evolving digital space. Today, we are having to look at the repercussions of elements such as cobalt, nickel, and lithium; we are having to consider greener technologies to meet the energy demand relating to generative AI. A decade, or even a few years from now, our targets will likely be completely different. And if I can add one more point, I suggest that we have, in addition to the seven recommendations, an outlook on environmental impact beyond the terrestrial, because we are now mining outer space. The global space race for mining resources to quench our generative AI thirst needs consideration as well. I’d like to pause there for now. Thank you very much.

Sorina Teleanu: Thank you also, Meena. Thank you for making us think of issues right in front of us that sometimes we tend not to see just because they are right in front of us, and for raising more awareness about the use and misuse of natural resources here on earth, but also in outer space. That’s not something we talk about so much in AI governance discussions, but it is a very important point, also because we don’t necessarily have a global framework for the exploitation of space resources, and it would probably be better to start thinking about that sooner rather than later. Because, as Meena was saying, we do see a lot of competition for the use of resources for the development of AI and other technologies. So, thank you so much for bringing that up as well. Moving on to Muta for your reflections, please.

Muta Asguni: Thank you so much, Sorina. Really happy to be here with you on this session at the IGF. I think there is a lot of ambiguity and uncertainty, as you mentioned, Sorina, surrounding AI. This is not just the talk of the hour, and not just in Saudi; this is the talk of the minute and the second everywhere in the world. Before I talk about the paper and the report, I want to take a step back and look at history, because history is not just the greatest teacher; it’s also the greatest predictor of the future. We as human beings, as a global society, as united nations, have been here before. Four times. We’ve been here before in the first industrial revolution, with the transition from agriculture to industrialization; then again with the introduction of electricity; again with the introduction of computers; and then, in the fourth industrial revolution, with the introduction of the Internet. And now we’re on the cusp of the fifth industrial revolution, with the transition from the digital age to the intelligence age. Each one of the previous four industrial revolutions had a profound impact on three specific aspects: on infrastructure; on society, mainly on labor; and on policy. Let’s take electricity as an example, as Brando just mentioned. When electricity was introduced, we had to develop a lot of new infrastructure to deliver it to every home, to give everyone a chance to harness its power and use it in a safe and robust manner. When we talk about electricity and its impact on society and jobs, the jobs market was never the same before and after electricity; it changed forever, and we adopted, adapted, and prospered together with electricity. We upskilled and reskilled our economies and our people to be able to leverage that technology for the greater good. In terms of policy, we developed standards and frameworks; we as an international community came together to build a robust and meaningful framework that we can all work on together for the greater good in the use of electricity. AI is not going to be different. If we look through the same three lenses from the AI perspective, take infrastructure as an example. Today, we’re using about 7 gigawatts of electrical power in data centers in the world. This is projected to grow to 63 gigawatts by 2030: in just five years, we’re expected to consume almost ten times the electricity that we consume today for data centers. This will have a profound impact on the environment. But the good news is, and this is a funny anecdote, about 30% of the 7 gigawatts that we use today is actually being used to predict the weather, using very old machine learning technologies, just to predict seven days of weather. Now we can use generative AI to predict not just 7 but 12 days with much less power, and reutilize that excess power for new uses of AI. In terms of society, yes, AI is going to have a profound impact on jobs. Jobs are, again, not going to be the same before and after we fully adopt AI. But we as a global society need to come back again, upskill and reskill our economies, in order to adopt, adapt, and prosper together with AI. Finally, in terms of policy, this is the main topic of discussion in this session. Like every technology, there are two aspects when it comes to policy for AI.
This is true for every technology: there is a local aspect and a global aspect. On the local aspect, we can look at the collection, use, and access of data and AI technologies within specific geographies, according to local priorities and agendas. On the global aspect, where amazing work has already been done with the establishment of this report, we need to work as global bodies with local governments and with the private and public sectors. And the good news is that everyone is willing to put their hands together and leverage whatever we have today for the good of humanity and to ease the adoption of AI. With that, I look forward to the rest of the session and the discussion. Thank you so much.

Sorina Teleanu: For the good of humanity is a very good way to end this section of the discussion. I did promise we’d have a dialogue, and we only have 19 more minutes of this session, so I’m going to try to do that. I’m looking around the room, and I’m also counting on my colleague online to tell us what’s happening there. Any hands? Anyone who would like to… Do you have a mic there, or how does it work? Okay, I’ll come to you; probably easier. Let’s try. Please also introduce yourself.

Audience: Thank you so much for giving me the opportunity to ask some questions. First, I’d like to congratulate you on the hard work to release this report; I know it is very hard work to address such complicated issues, and I’m eager to read it. But back to the main theme, the AI governance we want: I want to ask a fundamental question about what the overarching goal of AI governance is. Is it acceptable to use the title of the very first United Nations resolution on AI, adopted by the General Assembly in March, Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development? If not, what is a better articulation of the overarching goal of AI governance? That’s my first question. My second question is that I believe governance is beyond regulation: governance also deals with technical innovation, because we do need technical innovation, but we need governance to guide these innovations for the greater good of the people and the planet. So if sustainable development is the overarching goal of AI governance, how can we guide AI innovation in line with the sustainable development goals, and even accelerate their implementation? And my third question is back to regulation. The common concern about AI applications is disinformation, but disinformation comes from the misuse of AI tools. Take traffic safety as an example: for traffic safety we need safe cars and safe roads, but more importantly we need people, the drivers, to obey the rules. So how can we have a comprehensive governance framework to regulate the behavior of AI users? I’ll stop here. Thank you so much.

Sorina Teleanu: Thank you also. I think we’ll try to get a few more questions and then provide reflections. Any more points from the room? I don’t see any hands, and we covered quite a few topics, so I’m pretty sure you have at least a small reflection in mind. Ah, I’m seeing a hand there. Could you please come over? There are only mics here, unfortunately. Meanwhile, I do like your question: what do we want from AI governance, and what is the AI governance we want? And while we’re waiting, Jimena, would you like to provide some reflections?

Jimena Viveros: Yes. Obviously, there have been around four important resolutions this year regarding AI. One was promoted by China; the one you mentioned is fantastic. All of these resolutions are steps forward, and they are also leading up to the global governance that we want and that we expect. That’s why we had the Summit of the Future this September, and we had the Global Digital Compact and the Pact for the Future. All of the documents that were adopted therein are a monumental step, because we are now setting the path for AI to be governed by and for humanity. That’s actually the title of the Secretary-General’s High-Level Advisory Body report, Governing AI for Humanity. And I’d like to say also for the benefit and protection of humanity, because, as you mentioned, AI has enormous potential and can be harnessed for good, for the advancement of all of the sustainable development goals. However, as Amina Mohammed said at the Arab Forum this past March in Beirut, there can be no sustainable development without peace. So, going back to the point of peace and security and the dual-use nature of AI, what we want to create is global governance that encompasses this dual-use nature, repurposability, all of that. And it’s important to have it because, again, if not, we’re just going to have fragmented approaches that are not interoperable, coordinated, or cooperative. We all need to work towards this, and the only way we can do it is by the adoption of a binding treaty. That’s going to be hard, but we need to be ambitious in order to have this technology governed by us, so that we are not eventually surpassed by it.

Audience: Hi, my name is Ansgar Kuhne, I’m with EUI. Of course, I’d like to congratulate the Policy Network on AI for this very important report. I’d like to invite the panel to reflect on the interaction between the liability and interoperability aspect, specifically if we have interoperating AI systems, how best to identify where liability would lie in case issues do arise. Is there a role for contractual agreements in this? And if so, how to deal with the imbalances in both informational and economic power that various actors within that network of interoperating players may have? Thank you.

Sorina Teleanu: Thank you as well. Do we have more hands in the room? Yes, we do. Try the mic over there; if not, I’ll come your way. Not working? Okay, it’s going to take me a while. If anyone would like to provide any reflection while I do the walk, please go ahead. Thank you.

Audience: Thank you. Riyad Najm; I’m from the media and communications sector. Now, in order for us to govern something, don’t we have to define it first? I mean, we all talk about artificial intelligence and what it is and what good or bad it can do to us, but until now I cannot see a correct and definite definition of AI. Does it mean the speed at which we can execute our computations, or is it the amount of data that we can access and manipulate at the same time? We all know that artificial intelligence was established a long time ago; the only reason it is becoming relevant now is that we are able to access data simultaneously in great amounts, and we have extremely high-speed computation. So we need to define it first before we try to govern it. And maybe my other comment: for the past almost 20 years, we have not been able to govern the internet itself at a global level. All we get are sometimes guidelines, some initiatives, and so on; there has never been a treaty that can cover this. Are we able to do that for artificial intelligence? I leave that to the panel to answer. Thank you.

Sorina Teleanu: Thank you as well for taking us many, many steps back and asking the question of what exactly we are talking about when we talk about AI. I’m going to turn online and see if we have any questions and reflections from there, briefly, from our online participants, not our online speakers. If our online moderator can just go ahead and unmute.

Audience: Okay, thank you very much. Actually, we have a problem with our audio, so I’m not quite sure whether you can hear us well or not.

Sorina Teleanu: Please go ahead, we can hear you well.

Audience: Okay, there are a few questions from the online audience. The first one, I think, is more or less a repetition of the last question: how do we address the regulatory arbitrage between countries, especially between the Global North and the Global South? The situations are very different, and even for the internet, it has been hard for us to regulate it for the past 20 years. So how do we address this arbitrage? The second question is that we have a problem of wrong usage of AI, and the wrong usage of AI is worst in the case of military applications. How do we safeguard ourselves to ensure that AI is not being wrongly used for military purposes? And then we have an interesting question from Omar. The third question is: AI is producing a lot of harms in online interactions; there are several bullying cases, and a lot of things are being generated falsely by generative AI. How do we protect ourselves, especially the young generations? And finally, how do we foster collaboration when we encounter these problems? I think those four questions are enough, because we have about seven minutes more. Thank you very much.

Sorina Teleanu: Thank you also, Mohd, sorry, and to everyone else online and here. We have seven minutes to answer quite a few questions, and I’m going to turn to you. Please go ahead.

Brando Benifei: Yes, well, a lot of different things have been asked; I’ll try to answer a few. On the issue of the definition that was touched on, it was a big issue for us too. In the end, I think it’s very important that we concentrate as much as possible on defining the concrete applications of AI, so that we define the systems, we define what we want to regulate for regulation’s sake, because we are not talking about philosophy or other sciences that should analyze AI in different aspects. We have been working on that in the EU, and there are important processes ongoing at the UN and the OECD, and I think we need to stick to the minimum that we can, so that we can find more agreement; otherwise, we will lose sight of the goal. On the sustainable development goals issue that was mentioned, I think it’s important to also mention the risk of excessive wealth concentration that will limit access to services, and the same issues we mentioned of lifelong learning, etc. In fact, there is also the issue of how we distribute the added value created by AI-driven increases in productivity. On the policy side, this cannot be avoided; I think we need to bring it to the table too. It’s a fiscal policy, a budgetary policy, a new welfare system, because with the revolutions we talked about, the industrial revolution, electricity, the digital space, we have seen changes in how we organized our safety nets and our state support systems. We need to work on that as well. And finally, on liability, I want to say that it’s very important that we work on finding more transparency. This is what we have been working on with the AI Act, because if there is no downstream transparency between the various operators in the AI value chain, then the risk of asymmetry and of responsibilities being pushed down the chain will be damaging to the weaker actors; it will strengthen the incumbents and not lead to a healthy market for AI. So liability, transparency: yes, we can have contractual agreements, but only if we have strong safeguards against the lack of information. Otherwise we will just entrench market advantages and, I think, suppress innovation. We need to find a good way. In Europe we are now working on new AI liability legislation that complements the AI Act, and we will surely be discussing this in the future in this kind of context as well. Thank you very much.

Sorina Teleanu: Thank you also, Brando, for highlighting the need for whole-of-government and whole-of-society approaches to dealing with the challenges of AI. Any more reflections from our speakers? We have three more minutes.

Muta Asguni: There were a lot of questions regarding regulation, governance, and the definition of AI. I just want to take a step back and highlight an amazing approach that has been taken in the report with regard to forms of regulation: focusing on the value chain of AI. You cannot govern the whole of AI at once; you need to break it down into components and look at each component in isolation from the others. I also want to mention that we still don’t fully understand AI. This is a technology, maybe not the first, since we still don’t fully understand electricity either, for example, that is giving us answers in a way that is not very transparent: we don’t know why the model gave us that answer. So a change in how we regulate and govern such a technology is very much needed. We cannot take only a reactive approach, especially when it comes to liability; we also need to pursue a proactive approach in the appropriate components of the value chain. In the data layer, for example, the collection of data, we can take a reactive approach, but with access to AI, for example, we may need to consider a more proactive approach to governance and regulation. From that, I also want to talk a little bit about interoperability, because one of the biggest questions that we get from investors when it comes to investment in Saudi Arabia, for example, is: if I’m compliant with the laws and regulations in country X, am I going to be compliant with the laws and regulations in your country? This is a very important question, especially when it comes to the differences between the GDPR and the PDPL in KSA. So having frameworks and interoperability is key, especially when it comes to data, because data governance is currently relatively clear, and from there we can move upstream into the value chain of AI. Thank you.

Meena Lysko: Thank you. Thank you very much. Perhaps just from my side, I’d like to again emphasize that in order for us to have an equal future, sincere and responsible collaboration is crucial, and we need to prioritize sustainability in the design, deployment, and governance of generative AI technologies, as the report puts it. And maybe one last point: without an environment, there is no point in collaborating to boost economies or develop societies. We need to move off our path of total global destruction. Thank you.

Sorina Teleanu: Thank you also. We have quite a few powerful messages out of this session. I hope someone is taking due notes; if not, we have AI-enabled reporting. Jimena?

Jimena Viveros: Very quickly, I just wanted to say that even though there is no one single definition of AI, the technology has been here for over 70 years, so we have some understanding of it. What we’re trying to do now is to whiten the black box, in terms of explainable AI and so on. We’re trying to do forensics on the models, for example, to see how they came up with their outputs. This is very important work that is going to help us make AI more accountable. And the global governance framework, I think, should be overarching across all of the topics; obviously, there are going to be a lot of subset regimes, but they should all sit under the umbrella of that governance. And just to finish on liability, I think the one conclusion we can come to is that if you cannot fully control the effects of a technology, you should accept, by the mere fact that you are using it, that you will be responsible for whatever happens. That should be the general rule we keep in mind for now, especially when it comes to the peace and security domain or when human rights violations are involved: high-scale or high-risk frontier models, and all the other types of decision-support systems and autonomous weapons systems. Thank you.

Sorina Teleanu: Thank you. Yves, Anita, any final reflections from you before we wrap up?

Yves Iradukunda: Just to, again, agree with the comment on liability. I think it goes back to the emphasis that has been placed on awareness and capacity building, because some of the liability may come from the most vulnerable link within our ecosystem. That means we need to emphasize partnership, because if responsible use of these methods is applied in only one jurisdiction, it will not leave the rest of the countries or organizations safe. So again, an emphasis on building the partnerships that reinforce collaboration to advance some of these values that have been discussed.

Sorina Teleanu: Thank you. Anita, if you’re still with us and would like to add something? Okay, perhaps not. We are out of time. I’m not even going to try to summarize the many points that have been touched on today, but I’m sure there will be a very comprehensive report from the Policy Network facilitators, and there will also be an AI-enabled one, as I was saying. I do, again, encourage everyone to take a look at the report, maybe even only the recommendations; there is a chatbot that will allow you to interact with it directly. I’m looking forward to seeing how the Policy Network will continue its work, building on some of the very useful and thought-provoking reflections from today. Many thanks to our speakers here and online, many thanks to you in the room for your contributions, and to our online participants as well. Enjoy the rest of the IGF, and let’s see where we get with AI, humanity, governance, society, and all the implications around them. Thank you so much.

Jimena Viveros

Speech speed: 147 words per minute
Speech length: 842 words
Speech time: 342 seconds

Need for global governance framework for AI

Explanation: Jimena Viveros argues for the necessity of a global governance framework for AI. She emphasizes that this framework should be overarching and encompass all topics related to AI governance.
Evidence: She mentions that there will be subset regimes, but they should all fall under the umbrella of global governance.
Major Discussion Point: AI Governance and Regulation
Agreed with: Brando Benifei, Anita Gurumurthy
Agreed on: Need for global governance framework for AI

Importance of state responsibility for AI systems

Explanation: Jimena Viveros emphasizes the need for state responsibility in the production, deployment, and use of AI systems throughout their entire lifecycle. She argues that this is crucial for rebuilding trust in the international system.
Major Discussion Point: Liability and Accountability for AI
Agreed with: Brando Benifei, Anita Gurumurthy
Agreed on: Importance of addressing liability and accountability in AI systems
Differed with: Anita Gurumurthy
Differed on: Scope of liability for AI systems

Challenge of allocating responsibility given opacity of AI systems

Explanation: Viveros highlights the difficulty in allocating responsibility due to the opacity of AI systems. She points out that the ‘black box’ nature of AI makes it challenging to determine how decisions are made.
Evidence: She mentions ongoing efforts to develop explainable AI and forensic techniques to understand how AI models produce their outputs.
Major Discussion Point: Liability and Accountability for AI

Brando Benifei

Speech speed: 131 words per minute
Speech length: 1909 words
Speech time: 872 seconds

Importance of domestic policies alongside global governance

Explanation: Brando Benifei emphasizes the need for both domestic policies and global governance for AI. He suggests that while global cooperation is necessary, countries also need to develop their own rules for dealing with AI in their societies.
Major Discussion Point: AI Governance and Regulation
Agreed with: Jimena Viveros, Anita Gurumurthy
Agreed on: Need for global governance framework for AI

Need to focus on concrete AI applications in regulation

Explanation: Benifei argues for focusing on defining and regulating concrete applications of AI rather than getting bogged down in philosophical definitions. He suggests this approach is more practical for regulatory purposes.
Evidence: He mentions that the EU has been working on this approach, and there are ongoing processes at the UN and OECD.
Major Discussion Point: AI Governance and Regulation
Differed with: Muta Asguni
Differed on: Approach to AI regulation

Need for transparency to address liability issues in AI value chain

Explanation: Benifei emphasizes the importance of transparency in the AI value chain to address liability issues. He argues that without downstream transparency, there is a risk of asymmetry and unfair distribution of responsibilities.
Evidence: He mentions that the EU is working on new AI liability legislation to complement the AI Act.
Major Discussion Point: Liability and Accountability for AI
Agreed with: Jimena Viveros, Anita Gurumurthy
Agreed on: Importance of addressing liability and accountability in AI systems

Need to consider AI’s impact on labor markets and upskilling

Explanation: Benifei emphasizes the need to consider AI’s impact on labor markets and the importance of upskilling. He argues that AI will significantly change the job market, requiring adaptation and new skills.
Evidence: He compares the impact of AI to previous industrial revolutions, suggesting it could be as transformative as the introduction of electricity.
Major Discussion Point: Environmental and Social Impacts of AI

Need for common standards and definitions for AI globally

Explanation: Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests this is crucial for enabling interoperability and cooperation between different parts of the world.
Evidence: He mentions ongoing work in various international fora to develop these common standards.
Major Discussion Point: AI and Global Cooperation
Agreed with: Meena Lysko, Yves Iradukunda
Agreed on: Need for collaboration and partnerships in AI development and governance

Anita Gurumurthy

Speech speed: 140 words per minute
Speech length: 752 words
Speech time: 320 seconds

Challenges of regulating AI given its transboundary nature

Explanation: Anita Gurumurthy highlights the difficulties in regulating AI due to its cross-border nature. She points out that the opacity of algorithms in cross-border value chains, combined with trade secret protections, can hinder effective regulation.
Evidence: She cites a recent case involving Lyft and Uber where the Washington Supreme Court ruled that reports maintained as trade secrets should be made public in the public interest.
Major Discussion Point: AI Governance and Regulation
Agreed with: Jimena Viveros, Brando Benifei
Agreed on: Need for global governance framework for AI

Need for liability rules for both producers and operators of AI systems

Explanation: Gurumurthy argues for the necessity of liability rules that apply to both producers and operators of AI systems. She emphasizes that a high level of care is needed in designing, testing, and employing AI-based solutions.
Evidence: She provides examples of social welfare systems and government employment of AI systems to illustrate the importance of operator liability.
Major Discussion Point: Liability and Accountability for AI
Agreed with: Jimena Viveros, Brando Benifei
Agreed on: Importance of addressing liability and accountability in AI systems
Differed with: Jimena Viveros
Differed on: Scope of liability for AI systems

Muta Asguni

Speech speed: 137 words per minute
Speech length: 1135 words
Speech time: 495 seconds

Importance of proactive and reactive approaches to AI governance

Explanation: Muta Asguni argues for a combination of proactive and reactive approaches to AI governance. He suggests that different components of the AI value chain may require different regulatory approaches.
Evidence: He gives examples of taking a reactive approach to data collection and a more proactive approach to AI access.
Major Discussion Point: AI Governance and Regulation
Differed with: Brando Benifei
Differed on: Approach to AI regulation

Potential of AI to support sustainable development goals

Explanation: Asguni highlights the potential of AI to contribute to sustainable development goals. He suggests that AI can be a tool to improve citizens’ lives in areas such as healthcare, education, and agriculture.
Major Discussion Point: Environmental and Social Impacts of AI

Challenge of regulatory arbitrage between countries

Explanation: Asguni highlights the challenge of regulatory arbitrage between countries in AI governance. He points out that differences in regulations between countries can create complications for businesses and investors.
Evidence: He gives an example of investors asking about compliance with regulations in different countries, specifically mentioning differences between the GDPR and the PDPL in Saudi Arabia.
Major Discussion Point: AI and Global Cooperation

Meena Lysko

Speech speed: 124 words per minute
Speech length: 968 words
Speech time: 467 seconds

Environmental impacts of AI infrastructure and resource extraction

Explanation: Meena Lysko highlights the significant environmental impacts of AI infrastructure and resource extraction. She emphasizes the need to consider these impacts in the development and deployment of AI technologies.
Evidence: She provides detailed examples of environmental damage from mining activities for materials used in AI hardware, such as cobalt mining in the Democratic Republic of Congo and lithium mining in Chile’s Atacama Desert.
Major Discussion Point: Environmental and Social Impacts of AI

Need for sincere collaboration on responsible AI development

Explanation: Lysko emphasizes the importance of sincere and responsible collaboration in the development and governance of AI technologies. She argues that this is crucial for creating an equal future and addressing global challenges.
Major Discussion Point: AI and Global Cooperation
Agreed with: Yves Iradukunda, Brando Benifei
Agreed on: Need for collaboration and partnerships in AI development and governance

Yves Iradukunda

Importance of addressing inequalities in AI adoption and access

Explanation: Yves Iradukunda highlights the need to address inequalities in AI adoption and access. He emphasizes the importance of building capacity and awareness to ensure equitable development and use of AI technologies.
Major Discussion Point: Environmental and Social Impacts of AI

Importance of partnerships to bridge divides in AI development

Explanation: Iradukunda stresses the importance of partnerships in bridging divides in AI development. He argues that collaboration is essential to enforce responsible use of AI across different jurisdictions.
Major Discussion Point: AI and Global Cooperation
Agreed with: Meena Lysko, Brando Benifei
Agreed on: Need for collaboration and partnerships in AI development and governance

Agreements

Agreement Points

Need for global governance framework for AI
Speakers: Jimena Viveros, Brando Benifei, Anita Gurumurthy
Arguments: Need for global governance framework for AI; Importance of domestic policies alongside global governance; Challenges of regulating AI given its transboundary nature
Summary: The speakers agree on the necessity of a comprehensive global governance framework for AI, while acknowledging the need for domestic policies and the challenges posed by AI’s transboundary nature.

Importance of addressing liability and accountability in AI systems
Speakers: Jimena Viveros, Brando Benifei, Anita Gurumurthy
Arguments: Importance of state responsibility for AI systems; Need for transparency to address liability issues in AI value chain; Need for liability rules for both producers and operators of AI systems
Summary: The speakers emphasize the importance of establishing clear liability and accountability mechanisms for AI systems, including state responsibility and transparency in the AI value chain.

Need for collaboration and partnerships in AI development and governance
Speakers: Meena Lysko, Yves Iradukunda, Brando Benifei
Arguments: Need for sincere collaboration on responsible AI development; Importance of partnerships to bridge divides in AI development; Need for common standards and definitions for AI globally
Summary: The speakers agree on the importance of collaboration, partnerships, and common standards in AI development and governance to address global challenges and bridge divides.

Similar Viewpoints

Muta Asguni and Brando Benifei both emphasize the need for practical approaches to AI governance, focusing on specific applications and combining proactive and reactive regulatory strategies (Importance of proactive and reactive approaches to AI governance; Need to focus on concrete AI applications in regulation).

Meena Lysko and Muta Asguni both highlight the environmental and developmental aspects of AI, recognizing its potential for sustainable development while also acknowledging its environmental impacts (Environmental impacts of AI infrastructure and resource extraction; Potential of AI to support sustainable development goals).

Unexpected Consensus

Importance of addressing inequalities in AI adoption and access
Speakers: Yves Iradukunda, Anita Gurumurthy, Brando Benifei
Arguments: Importance of addressing inequalities in AI adoption and access; Challenges of regulating AI given its transboundary nature; Need to consider AI’s impact on labor markets and upskilling
Summary: Despite representing different regions and perspectives, these speakers unexpectedly converged on the importance of addressing inequalities in AI adoption, access, and its impact on labor markets, highlighting a shared concern for equitable AI development.

Overall Assessment

Summary: The main areas of agreement include the need for a global governance framework for AI, the importance of addressing liability and accountability, the necessity of collaboration and partnerships in AI development, and the recognition of AI’s environmental and social impacts.

Consensus level: There is a moderate to high level of consensus among the speakers on the key issues surrounding AI governance. This consensus suggests a growing recognition of the complex challenges posed by AI and the need for coordinated global action. However, differences in emphasis and approach indicate that achieving a unified global framework for AI governance may still face significant challenges.

Differences

Different Viewpoints

Approach to AI regulation
Speakers: Brando Benifei, Muta Asguni
Arguments: Need to focus on concrete AI applications in regulation; Importance of proactive and reactive approaches to AI governance
Summary: Benifei advocates for focusing on concrete AI applications in regulation, while Asguni suggests a combination of proactive and reactive approaches depending on the component of the AI value chain.

Scope of liability for AI systems
Speakers: Anita Gurumurthy, Jimena Viveros
Arguments: Need for liability rules for both producers and operators of AI systems; Importance of state responsibility for AI systems
Summary: Gurumurthy emphasizes liability for both producers and operators of AI systems, while Viveros focuses more on state responsibility throughout the AI lifecycle.

Unexpected Differences

Emphasis on different aspects of AI governance
Speakers: Jimena Viveros, Yves Iradukunda
Arguments: Importance of state responsibility for AI systems; Importance of partnerships to bridge divides in AI development
Summary: While both speakers discuss AI governance, their focus is unexpectedly different. Viveros emphasizes state responsibility, while Iradukunda stresses the importance of partnerships and collaboration. This highlights the complexity of AI governance and the various approaches that can be taken.

Overall Assessment

Summary: The main areas of disagreement revolve around the approach to AI regulation, the scope of liability for AI systems, and the balance between global governance and domestic policies.

Difference level: The level of disagreement among the speakers is moderate. While there are differences in emphasis and approach, there is a general consensus on the need for AI governance and regulation. These differences reflect the complexity of AI governance and the various perspectives that need to be considered in developing effective policies and frameworks. The implications of these disagreements suggest that a multifaceted approach to AI governance may be necessary, incorporating elements from various viewpoints to create a comprehensive and effective regulatory framework.

Partial Agreements

Brando Benifei and Muta Asguni both agree on the need for global governance of AI, but they differ in their emphasis: Benifei stresses the importance of domestic policies alongside global governance, while Asguni highlights the challenges of regulatory arbitrage between countries (Importance of domestic policies alongside global governance; Challenge of regulatory arbitrage between countries).

Meena Lysko and Muta Asguni both address the environmental and developmental aspects of AI, but from different angles: Lysko focuses on the negative environmental impacts of AI infrastructure, while Asguni emphasizes the potential of AI to support sustainable development goals (Environmental impacts of AI infrastructure and resource extraction; Potential of AI to support sustainable development goals).

Takeaways

Key Takeaways

There is a need for global governance and regulation of AI, balanced with domestic policies

Liability and accountability frameworks for AI need to address both producers and operators

The environmental and social impacts of AI, including on labor markets and inequality, must be considered

Global cooperation, common standards, and partnerships are crucial for responsible AI development

AI governance should aim to harness AI’s potential for sustainable development while mitigating risks

Resolutions and Action Items

Continue work of the Policy Network on AI to build on insights from this discussion

Encourage stakeholders to review the full Policy Network on AI report and its recommendations

Explore use of the AI chatbot created to allow interaction with the report’s contents

Unresolved Issues

How to achieve a binding global treaty or governance framework for AI

How to balance proactive and reactive approaches to AI regulation

How to address regulatory arbitrage between countries, especially Global North and South

How to define AI in a way that allows for effective governance

How to ensure transparency and explainability of AI systems for accountability purposes

How to protect against misuse of AI, especially in military applications

Suggested Compromises

Focus on regulating concrete AI applications rather than trying to define AI as a whole

Adopt a value chain approach to AI governance, addressing different components separately

Balance global governance frameworks with flexibility for domestic implementation

Combine proactive and reactive regulatory approaches for different aspects of AI

Thought Provoking Comments

Digital technology must serve humanity, not the other way around.
Speaker: UN Secretary General (quoted by Sorina Teleanu)
Reason: This concise statement cuts to the heart of the ethical considerations around AI governance, framing the discussion in terms of human-centric values.
Impact: It refocused the conversation on the fundamental purpose and ethics of AI development, beyond just technical or policy considerations.

We need to look at this very carefully. The other thing I want to say is that the recommendations in the environment section could also look at very useful concepts coming from international environmental law, you know, the Biodiversity Convention, common but differentiated responsibilities, because the financing that is needed for AI infrastructures will require us to adopt a gradient approach.
Speaker: Anita Gurumurthy
Reason: This comment introduced important environmental and legal perspectives that had not been previously discussed, highlighting the need for a nuanced, global approach.
Impact: It broadened the scope of the discussion to include environmental concerns and international legal frameworks, leading to more holistic consideration of AI governance.

We can’t talk about responsible AI as if AI were isolated from everything else that we do in our lives. When we think about AI as a technology, we also need to reflect on why AI to begin with, why technology to begin with, and what the impact of technology has been all along, before AI even came in.
Speaker: Yves Iradukunda
Reason: This comment challenged participants to consider AI in a broader historical and societal context, rather than as an isolated phenomenon.
Impact: It shifted the discussion towards a more holistic view of AI’s role in society and its relationship to other technologies and social issues.

Today, we’re using about 7 gigawatts of electrical power in data centers in the world. This is projected to grow to 63 gigawatts by 2030: in just five years, we’re expected to consume almost ten times the electricity that we consume today for data centers.
Speaker: Muta Asguni
Reason: This comment provided concrete data on the environmental impact of AI, bringing a tangible dimension to the discussion of sustainability.
Impact: It grounded the conversation in real-world implications and highlighted the urgency of addressing AI’s environmental impact.

Now, in order for us to govern something, don’t we have to define it first? I mean, we all talk about artificial intelligence and what it is and what good or bad it can do to us, but until now I cannot see a correct and definite definition of AI.
Speaker: Riyad Najm (audience member)
Reason: This question challenged a fundamental assumption of the discussion, pointing out the lack of a clear, agreed-upon definition of AI.
Impact: It prompted speakers to address the challenge of defining AI for governance purposes, leading to a more nuanced discussion of how to approach regulation.

Overall Assessment

These key comments shaped the discussion by broadening its scope beyond technical and policy considerations to include ethical, environmental, historical, and definitional challenges. They pushed participants to consider AI governance in a more holistic, global context, while also highlighting the urgency of addressing concrete impacts. The discussion evolved from specific policy recommendations to grappling with fundamental questions about the nature of AI and its role in society.

Follow-up Questions

How feasible is a global governance regime for AI?

speaker

Sorina Teleanu

explanation

This is important to explore concrete steps for establishing international cooperation on AI governance, given current challenges with multilateralism.

What is the overarching goal for AI governance?

speaker

Audience member

explanation

Defining a clear goal is crucial for aligning global efforts on AI governance and guiding policy development.

How can we guide AI innovation to accelerate implementation of sustainable development goals?

speaker

Audience member

explanation

This explores how to harness AI’s potential for addressing global challenges while mitigating risks.

How can we develop a comprehensive governance framework to regulate the behavior of AI users?

speaker

Audience member

explanation

This addresses the need to govern not just AI systems, but also how humans interact with and use AI technologies.

How to best identify where liability would lie in case issues arise with interoperating AI systems?

speaker

Ansgar Kuhne

explanation

This explores the complex legal challenges of assigning responsibility in interconnected AI systems.

What is the correct and definite definition of AI?

speaker

Riyad Najm

explanation

A clear definition is necessary to properly scope and implement AI governance efforts.

How do we address the regulatory arbitrage between countries, especially between the global north and south?

speaker

Online audience member

explanation

This explores how to create equitable AI governance given different national contexts and capabilities.

How do we safeguard against the wrong usage of AI for military purposes?

speaker

Online audience member

explanation

This addresses critical concerns about AI’s dual-use nature and potential misuse in warfare.

How do we protect ourselves, especially young generations, from harms produced by AI in online interactions?

speaker

Online audience member (Omar)

explanation

This explores safeguards needed to protect vulnerable groups from AI-enabled online harms.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

DC-CIV & DC-NN: From Internet Openness to AI Openness

DC-CIV & DC-NN: From Internet Openness to AI Openness

Session at a Glance

Summary

This discussion focused on exploring the potential application of core Internet values and principles to AI governance and openness. Participants debated whether concepts like openness, interoperability, and non-discrimination that have shaped Internet development could be extended to AI systems. Many speakers emphasized that while AI and the Internet are distinct, there are important interconnections to consider as AI increasingly becomes an intermediary layer between users and online content.


Key points of discussion included the need for transparency and accountability in AI systems, concerns about AI amplifying existing biases and power imbalances, and debates over appropriate regulatory approaches. Some argued for applying Internet openness principles to AI, while others cautioned against direct equivalence given AI’s distinct nature. The importance of human rights frameworks in AI governance was highlighted, as was the need to consider societal and collective rights alongside individual protections.


Participants explored tensions between permissionless innovation and precautionary regulation for AI. There were differing views on the degree of standardization and interoperability needed for AI systems compared to Internet infrastructure. The discussion touched on challenges around AI safety, liability, and the concentration of AI development among a few large companies.


Overall, the session illuminated the complex considerations involved in developing governance frameworks for AI that balance innovation with responsibility. While no clear consensus emerged, the dialogue highlighted important areas for further exploration as AI governance evolves, including potential lessons from Internet governance experiences.


Keypoints

Major discussion points:


– The relationship and differences between AI and the internet


– Applying internet governance principles like openness and interoperability to AI


– The need for AI regulation and accountability, especially regarding risks and harms


– Balancing innovation with safety and human rights considerations in AI development


– The impact of AI on internet usage and information access


The overall purpose of the discussion was to explore whether and how core internet governance principles and values could be applied to AI governance and regulation. The participants aimed to identify lessons learned from internet governance that could inform approaches to AI.


The tone of the discussion was thoughtful and analytical, with participants offering different perspectives and occasionally disagreeing. There was a sense of grappling with complex issues without clear solutions. The tone became slightly more urgent near the end when discussing concrete next steps and the need for action on AI governance.


Speakers

– Luca Belli: Co-moderator


– Olivier Crépin-Leblond: Co-moderator


– Renata Mielli: Advisor at the Ministry of Science and Technology of Brazil, Chairwoman of CGI (Brazilian Internet Steering Committee)


– Anita Gurumurthy: IT4Change


– Sandrine Elmi Hersi: Leads ARCEP’s (French Regulator) Unit on Internet Openness


– Vint Cerf: Internet pioneer, Chief Internet Evangelist and Vice President at Google


– Yik Chan Chin: Professor at University of Beijing, leads work of PNAI (Policy Network on AI)


– Sandra Mahannan: Data scientist analyst at Unicorn Group of Companies


– Alejandro Pisanty


– Wanda Muñoz: Member of the feminist AI research network in UNESCO’s Women into Ethical AI platform


Additional speakers:


– Desiree: Audience member


Full session report

Expanded Summary of AI Governance Discussion


This discussion explored the potential application of core Internet values and principles to AI governance and openness. Participants debated whether concepts like openness, interoperability, and non-discrimination that have shaped Internet development could be extended to AI systems. The dialogue illuminated the complex considerations involved in developing governance frameworks for AI that balance innovation with responsibility.


Key Themes and Debates


1. Relationship between AI and Internet Governance


A central point of discussion was the relationship between AI and Internet governance principles. Vint Cerf, an Internet pioneer, emphasised that AI and the Internet are fundamentally different technologies requiring distinct governance approaches. However, other speakers like Luca Belli and Renata Mielli argued that some core Internet values, such as transparency and accountability, could apply to AI governance. Mielli specifically mentioned the Brazilian Internet Steering Committee’s principles as potentially applicable to AI governance.


There was general agreement that AI is creating a new intermediary layer between users and Internet content, which could significantly change how users access and interact with online information. Sandrine Elmi Hersi noted that this could potentially restrict user agency and transparency in accessing online information. She highlighted predictions that search engine traffic could decline by 25% by 2026 due to AI chatbots, emphasizing the impact of generative AI on internet openness.


2. Openness and Interoperability in AI Systems


The concept of openness in AI systems sparked debate. Anita Gurumurthy argued that openness in AI is complex and doesn’t necessarily lead to transparency or democratisation. She critiqued the term “open” as potentially misleading in the context of AI. Vint Cerf pointed out that AI systems are mostly proprietary and not interoperable, unlike Internet protocols. Yik Chan Chin added that standardisation and interoperability of AI systems are extremely difficult currently.


3. Human Rights and AI Governance


Wanda Muñoz strongly advocated for a human rights-based approach to AI governance, beyond just ethics and principles. She emphasized the need for accountability, remedy, and reparation when violations of human rights result from AI use. This perspective shifted the discussion towards considering AI governance in terms of concrete human rights obligations and mechanisms for redress.


4. Regulation and Governance Approaches


Participants offered various perspectives on how to approach AI regulation:


– Vint Cerf suggested that accountability is important in the AI world, and parties offering AI-based applications should be held accountable for any risks these systems pose.


– Sandra Mahannan proposed that regulation should focus more on AI developers and models rather than users, highlighting the importance of data quality and challenges faced by smaller players in the AI industry.


– Yik Chan Chin emphasised the need for global coordination on AI risk categorisation, liability frameworks, and training data standards.


– Alejandro Pisanty cautioned against trying to regulate AI in general, suggesting instead to focus on specific applications. He also stressed the importance of separating the effects of human agency from the technology itself in AI governance.


5. Impacts and Risks of AI Systems


Several speakers highlighted potential risks and impacts of AI systems:


– Wanda Muñoz warned that AI systems can perpetuate and amplify existing societal biases and discrimination.


– Yik Chan Chin noted that AI poses new cybersecurity risks to Internet infrastructure.


– Alejandro Pisanty raised concerns that generative AI could lead to loss of information detail and accuracy.


Thought-Provoking Insights


Several comments shifted the discussion in notable ways:


1. Anita Gurumurthy challenged the current paradigm of data and wealth concentration in the tech industry, suggesting alternative paths for more distributed value creation.


2. Vint Cerf’s distinction between AI and the Internet prompted more careful consideration of which Internet governance principles may be applicable to AI. He also highlighted potential benefits of AI in improving our ability to ingest, analyze, and summarize information.


3. Sandrine Elmi Hersi’s insights on how AI is fundamentally changing user interaction with Internet content prompted discussion about implications for governance.


Unresolved Issues and Future Directions


The discussion left several key issues unresolved:


1. Balancing innovation and risk mitigation in AI regulation


2. Extent to which Internet governance principles can or should be applied to AI governance


3. Ensuring AI systems enhance rather than restrict access to diverse online information


4. Approaches for global coordination on AI governance given differing national/regional priorities


Participants suggested developing a joint report for the next Internet Governance Forum on elements that can enable an open AI environment. They also proposed continuing collaboration between the Dynamic Coalition on Core Internet Values and Dynamic Coalition on Net Neutrality on AI governance issues.


Follow-up questions raised by participants highlighted areas for further exploration, including:


– Balancing regional diversity and harmonisation needs in AI governance


– Strengthening multi-stakeholder involvement in AI governance


– Regulating AI from the developer angle


– Incorporating feminist and diverse perspectives into core values for AI governance


– Developing international norms for liability and accountability in AI


– Regulating AI in specific verticals or sectors


– Ensuring transparency in complex AI systems like large language models


– Addressing potential loss of specific details in AI-generated content


– Approaching AI regulation in relation to internet infrastructure


In conclusion, the discussion highlighted the complex and multifaceted nature of AI governance, involving technical, legal, and human rights considerations. While there was agreement on the need for AI governance, developing a unified approach will require balancing multiple perspectives and priorities, with a strong emphasis on human rights, accountability, and the unique challenges posed by AI as a new intermediary layer in internet interactions.


Session Transcript

Luca Belli: So, let me just start to introduce the panelists and introduce a little bit of the theme, and then I will give the floor to my friend and co-moderator, Olivier Crépin-Leblond. So, our panelists of today, you can already see them on the screen, the remote panelists, and here with us, the on-site panelists. We will start with Renata Mielli, who is advisor at the Ministry of Science and Technology of Brazil, and also the chairwoman of the CGI.br, the Comitê Gestor da Internet no Brasil, the Brazilian Internet Steering Committee. Then we will have Sandrine Elmi Hersi, who leads ARCEP's, the French regulator's, unit on internet openness. Then we will have Sandra Mahannan. Is Sandra with us or not? Yes, so she works for the Unicorn Group and for Omefe Technologies. Then we will have Mr. Vint Cerf, who needs few introductions. He is an internet pioneer and also chief internet evangelist, if I'm not mistaken, and vice president of Google. And then we will have Yik Chan Chin, who is professor at the University of Beijing and also leads the work of the PNAI, the Policy Network on AI. And last but not least, we should have our friend Alejandro Pisanty somewhere. Sorry, there is also Wanda Muñoz. So, do we have Alejandro Pisanty online? I don't see him. He's online, but he's not visible yet; we hope he will be soon. And then we will also have Wanda Muñoz, she's already online of course, and then last but not least Anita Gurumurthy from IT4Change. All right, now let me just introduce a little bit of the topic and why we have combined these two sessions of the Dynamic Coalition on Core Internet Values and on Net Neutrality and Internet Openness: because we had very similar proposals for this year's IGF sessions, to bring together individuals that have been working, in some cases for decades, on core internet values, and for at least one decade on internet openness, net neutrality and similar issues, to discuss what kind of lessons can be learned from internet openness and core internet values, if any, and transposed to the current discussions on AI governance and AI openness. We know very well that the Internet and AI are two different beasts, but there are a lot of things that overlap: to do AI you somehow need internet connections and the Internet as a whole, and a lot of things that happen on the Internet, especially most applications nowadays, rely on some sort of AI. So there is a deep connection between the two, but they are not exactly the same thing, and many of the core internet values or internet openness principles that we have been discussing for the past decade or so may apply or not. Some things may be more intuitive, like transparency. Transparency of internet traffic management, which we have discussed on net neutrality issues for a decade, is essential to understand to what extent the ISP is managing traffic: is this reasonable traffic management, or is this unduly blocking or throttling some specific traffic or not? This kind of transparency is also essential to understand what kind of decisions are taken by the AI systems we may rely upon for an enormously important part of our lives, from getting loans and credit in banks, to being identified and maybe arrested by police as criminals when services use face recognition.
So this kind of transparency, although different, be it in a rule-based system or in a very compute-intensive system like LLMs, which rely on a lot of computational capacity and are more predictive and probabilistic than deterministic, is needed in both cases: we need to understand how they function. And this is not something that, from a rule of law and due process perspective, we can accept to wave away by simply saying, you know, we don't know how it works. I understand that in many cases we don't know how it works, but this kind of transparency is essential as the counterpart of accountability: accountability to users, to regulators, to society at large. And then we have a lot of debates about interoperability and permissionless innovation, which are at the core of the internet's functioning, but most AI systems are not interoperable. And actually, the most advanced are also developed by a very small number of large corporations, which may lead us to exactly the kind of concentration that non-discrimination and decentralization, which are at the core of the internet, of net neutrality, and of internet openness, aim at avoiding. And to conclude, a very concrete example of how this concentration and the lack of net neutrality can even be put on steroids by AI, to some extent. We have very good examples now. We have been debating zero rating and its compatibility or incompatibility with net neutrality for almost the past decade. And we know that in most of the Global South, people access the internet primarily through the Meta family of apps, especially WhatsApp. And the fact that WhatsApp now includes, at the very top of its homepage, Meta AI means that de facto most of the people in the Global South have as their internet experience primarily WhatsApp, and will have as their AI experience primarily Meta AI, period. That is the reality for most of these people, who are, sadly, poor. They only have that as an introduction to AI, and they also work, for free, to train that specific AI. So those are a lot of considerations we have to keep in mind to understand to what extent we can transpose internet openness principles, to what extent we can learn lessons from regulation that already exists, and to some extent may have failed over the past decades in terms of internet openness, and also what kind of governance solutions we can put forward in order to shape the evolving AI governance and, hopefully, maybe, AI openness. At this point, this would be the moment where my co-moderator, Olivier, would provide his introductory remarks, but I'm seeing him intensely speaking with our remote moderation team. So, as the show must go on, I think we can start with the first speaker that we have on our agenda, which, if I'm not mistaken, is Renata Mielli. Olivier, do you want to give us your introductory remarks? Sorry, Renata. Olivier has arrived again, and he is going to provide his introductory remarks, and then we will pass the floor to Renata.


Olivier Crépin-Leblond: Yes, apologies for this. It's Olivier Crépin-Leblond speaking, and I've been running back and forth trying to get all of our remote panelists, because we've got quite a few, to have camera access, be able to be recognized by us, and so on. So sorry for the running in and out, but thank you for the introduction, and it's really great to have a meeting of both organizations, both dynamic coalitions together, for a topic which is of such interest. I'm going to give just a quick few words about the core internet values, because I'm not quite sure that everyone in the room knows about those; I see some new faces as well. This dynamic coalition started quite a while ago, based on the premise that the internet works due to a certain number of internet fundamentals that allowed the internet to thrive and to become what it is today. And those are quite basic actually; they're all technical in nature. So if I just look at them: the first one is the point that the internet is a global resource, an open medium, open to all. It's interoperable, which means that every machine around the network is able to talk to other machines, and I say machines because when we started it was every computer, but now it is of course all sorts of devices. It's decentralized: there's no overall central control of the internet, short of the single naming system, the DNS; apart from this, there are so many organizations that are involved in its coordination. It's also end-to-end, so it's not application-specific: you can put any type of application at one end and make it work with something at the other end, so the actual features reside in the end nodes and not with a centralized control of the network, and that makes it user-centric. End users have the choice of what services they want to access and how they want to access them, and these days, of course, using mobile devices, they're able to download onto their mobile devices any type of application that they want to run. And they don't really think about the internet running behind the scenes. And of course, the most important thing: it's robust and reliable. It's quite something to think that there are so many people now on a network which started with only a few thousand people, then a few hundred thousand, then a million and a few million, with some people back in the day thinking this was going to collapse. Well, it's still working. And it's still doing very well and is very reliable, considering the number of people on it, the number of people that are trying to break it, and the amount of malware and everything else that is out there. So it's pretty robust, pretty reliable. A few years ago, we also added another core value, which was that of safety. The right to be safe was one of the things that we felt was important to add as a core value. In other words, allowing for cyber measures to make sure the network itself doesn't collapse, and all sorts of ways, not to control content as such, but to make sure that you are safe when you use the internet and you're not going to be completely overwhelmed by the amount of malware and everything that's out there. And that's something which I think we were quite successful in doing: all the antivirus software, all of the devices, all of the things that we now have on the net to make it work. These are very open values. And of course, they're open for people to adopt.
And of course, we have seen erosion of these over the past years. The openness of the network has been put to the test on many occasions. There has also certainly been, as far as network neutrality is concerned, some traffic shaping and some things affecting it. But on the whole, it's still global, it's still interoperable, it's still got the basic values that we've just spoken about. And whilst we are seeing an erosion, we're also seeing that it's quite well understood by players out there. And we're looking at various different levels, so the telecommunication companies, governments, the operators, the content providers and so on, such that we have this equilibrium, if you want. I can't say a sweet spot, because it keeps on moving forward, but we have this equilibrium today, and we hope that we will be able to continue having this equilibrium tomorrow, one that will keep this internet innovative but at the same time also make it as safe as possible and as stable as possible. Because that's really something that now, with a network that is so important in everyone's lives, we need to make sure we have for the future. The economic and societal implications of having a broken internet are too big for us not to do this. So hopefully that's a message that's been well understood. But now we have AI. AI has come up and seems to have made an absolute revolution. Forget everything else that's happened before; we need to regulate, regulate, regulate. That's what some are saying. I think today's session is going to be looking at this. Do we need to regulate, regulate, regulate, or can we learn some lessons from the core internet values and how the internet has thrived to be what it is today, and apply them to artificial intelligence? Well, let's find out.


Luca Belli: Let’s start with our first panelist, Renata Miele from the Ministry of Science and Technology. Please, Renata, the floor is yours.


Renata Mielli: Hello. Thank you, Luca. Thank you, Olivier. It is a pleasure to be here discussing this interesting approach to AI and the Internet, and I am very happy with this bridge that you bring to us, to reflect about AI and the Internet and the core values that we have to have in mind when we talk about this new and pervasive technology. So, thank you very much, my colleagues. I am going to bring some historical perspective and try to make this bridge between the core values that we have in Brazil for the Internet and AI. Well, long-term analysis of Internet development shows that the Internet mostly benefited from a core set of original values that drove its creation, such as openness, permissionless innovation, interoperability and others. The Internet and its technologies historically fostered an interoperable environment guided by solid and universally accepted standards that enabled pervasiveness, shared best practices, collaboration and collective benefit from a unique network deployed worldwide. Just as with other types of technology, the development of the Internet was also based on academic collaboration networks, with researchers that worked together to deploy the initial stages of the global network. Artificial intelligence has been seen as a game-changer for a broad range of fields, from data science and news media to agriculture and related industries. In this sense, it is safe to start from the assumption that AI will greatly impact society in several terms: economically, politically, environmentally, socially and many others. The harder challenge we have is to drive this evolution in such a way that positive impacts surpass the negative ones, with AI being used to empower people in society for a more inclusive and fair future for all, and the first step for that is to have a clear consensus on fundamental principles for AI development and governance. The Brazilian Internet Steering Committee, CGI.br, outlined a set of ten principles for the governance and use of the internet in Brazil. Our so-called Internet Decalogue provides core foundations for internet governance and is very much in line with the core internet values proposed by the IGF's dynamic coalition, in a way that we believe can be leveraged to also meet expectations for the governance and development of AI systems. Principles such as standardization and interoperability are important for opening development processes, allowing for exchange and joint collaboration among various global stakeholders, strengthening research and development in the field. In the same sense, AI governance must be founded on human rights provisions, taking into account its multipurpose and cross-border applications. Principles such as innovation and democratic, collaborative governance can also be considered as foundations for artificial intelligence, in order to encourage the production of new technologies and to promote multistakeholder governance, with more transparency and inclusion throughout every related process. The same goes for transparency, diversity, multilingualism and inclusion, which can be interpreted in the context of AI systems development from the perspective that these technologies should be developed using technically and ethically safe, secure and trustworthy standards, curbing discriminatory purposes.
At the same time, the legal and regulatory environments hold particular relevance for the interpretation that the development and use of artificial intelligence systems should be based on effective laws and regulations, in order to foster the collaborative nature and benefits of this technology while safeguarding basic rights. It should also be noted that adopting a principles-based approach ends up generating more general guidelines, which can lead to implementation difficulties. However, this should be balanced with the need for each country to adapt AI governance to its local realities and peculiarities, in addition to granting greater sovereignty over how this governance should take place in terms of politics, economics, and social development. As a bottom line, we could think of AI development and governance as a priority topic for a more intense south-to-south collaboration, fostering the creation and expansion of research and development, as well as open and responsible innovation networks with long-term cooperation agreements and technology transfer, in order to corroborate sovereignty and solid development frameworks for the global south. It is important not to try to reinvent the wheel, and to draw upon good practices that already exist, such as the global articulations across the IGF and WSIS processes, or even more stable sets of proposed frameworks, such as the NETmundial principles and guidelines, that can orient the evolution of the ecosystem to be even more inclusive and results-oriented. Last but not least, existing coalitions, political groups, and other stakeholders should be included in this process and could be leveraged as platforms for collaboration within digital governance and cooperation as a whole, including in traditional multilateral spaces such as the BRICS or the G20. Brazil, for example, held the presidency of the G20 this year, in 2024, and will do the same with the BRICS in 2025. We believe that, in both cases, there will be good opportunities for fostering best practices in digital governance collaboration across different countries. Thank you very much.


Olivier Crépin-Leblond: Thank you very much, Renata. Wow, what a start. A lot of points being made here. I’ll ask that we all try and stick to our five minutes, because otherwise we’ll run over, we can speak for hours on these topics. But next is Anita Gurumurthy, IT4Change, and Abdelkader, are they ready?


Anita Gurumurthy: Yes, I’m here. Anita? Can you hear me?


Olivier Crépin-Leblond: She’s there, and she works, she can speak, yes? Perfect. Go ahead, Anita. Fantastic, we can hear you, yes.


Anita Gurumurthy: You can hear me, I hope. Yeah. All right. So, thank you very much. I just heard that from Renata, and I also note this wonderful point that Mr. Vint Cerf has made, that the Internet is used to access AI applications, but operationally AI systems don't need the Internet to function. I think we are making reference to the fact that, in many ways, algorithms predated the Internet, or the Internet-based revolution. However, the fact of the matter is that, just like the intimate relationship between time and space, or space and time, we have a relationship between the Internet and contemporary AI, which Mr. Cerf calls agentic AI. Allow me to be a little bit more critical of openness itself, because I think when I open up my house, what I mean is everyone is welcome. But I think the ideas of the open Internet and open AI do not necessarily, you know, map onto this kind of sentiment. So the term open Internet is used very frequently, but it doesn't have a universally accepted definition. And that is because, as all of us know, and none of us needs an introduction to the geoeconomics of the data paradigm here, data collection has become pivotal when we talk about the Internet paradigm. And it's used either to target ads in large proportions or to build products, and only a handful of players have the scale to meaningfully pull this off. So the result is a series of competing walled gardens, if you will. And they, of course, don't look like the idealized Internet we started with. And today's technology runs on a string, I would say, of closed networks: app stores, social networks, algorithmic feeds. And those networks have become far more powerful than the web, in large part by limiting what you can see and what you can distribute. So the basic promise of the Internet revolution, the scale, the possibility, is, well, I would say, not plausible at this conjuncture. Alongside all of this, the possibilities of community and solidarity haven't died, thank God for that, because we have the open source communities, and there are open knowledge communities. And of course, all of these remain open and vulnerable, unfortunately, to capitalist cannibalization and to state authoritarianism. So that is a bit of bemoaning the state of the Internet. And all of this points to an important thing, which is that instead of an economic order that could have leveraged the global Internet for the global commons of a data paradigm, we now have centralized data value creation by a handful of transnational platform companies, and we could actually have had, as Benkler pointed out long ago, a different form of wealth creation. Now, I come to openness in AI, and I'm cognizant of the five minutes that I have. I think it's worthwhile to look at Irene Salomon's analysis of AI labeled open. What is open AI? We're actually talking about a long gradient, right? You can talk about open as if it's one thing, but you could actually have something with very minimal transparency and reusability attributes. So that could also be open, and therefore open is not necessarily open to scrutiny. And the critique that other scholars, like Meredith Whittaker, mount against this paradigm is that we don't necessarily democratize or extend access when we talk about openness. Openness doesn't necessarily lower costs for large AI systems at scale. Openness doesn't necessarily contribute to scrutability, and it often allows for systemic exploitation of creators' labor. So where do we go from here? I mean, it's very sobering that when GPT-4 was published by OpenAI, for instance, they explicitly declined to release details about its architecture, including model size, hardware, training compute, data set construction, and training methods. So here we are. What we need to do is restore ideas of transparency, reusability, extensibility, and the idea of access to training data, in its politicized form. If we don't do this, then we will be lost. And my last submission here is that to be able to politicize each of these notions and make them part of ex-ante public participation, we need to turn to environmental law and look at the Aarhus Convention, for instance. We need a societal and collective rights approach to openness, whether it's the open Internet or open AI. And collective rights, societal rights, that do not preclude individual rights or liability for harms caused to individuals. I'm not precluding that, but we still need to understand what will benefit society and what will harm society. So we are looking really at a societal framework for rights, one that goes beyond the liberal frame and doesn't just always come back to "my product caused you harm", but really looks at the ethics and values of societies, at the sovereignty of the people, you know, as a collective. And here, I think we should understand that there are three cornerstones of substantive equality: the right to dignity and freedom from misrecognition, the right to meaningful participation in the AI paradigm, not just in a model, and the right to effective inclusion in the gains of AI innovation, which is for all countries and not just a couple. Thank you.


Luca Belli: Thank you very much, Anita, for this reality check and for reminding us that, behind the label of openness or open, one has to look at the substance of things. And the very good example of OpenAI, whose practice and architecture are really antithetical to openness, despite having open in its own name, allows us to think about the fact that if we want market players, very large ones, including multi-billion-dollar corporations, to stick to their promises, maybe some type of regulation is actually essential. And here it is very good to then pass the floor to Sandrine Elmi Hersi, because ARCEP has been very vocal and leading on internet openness over the past years of implementation, since 2015, of the open internet regulation in Europe. So it's very good, based on the experience you have had over the past decade, to understand what kind of mistakes might have been made, what kind of limits may exist, and what kind of lessons we can learn to better shape the openness of AI. Please, Sandrine, the floor is yours.


Sandrine Elmi Hersi: Thank you. First of all, let me thank the organizers of this session for this important conversation on how to incorporate the internet's fundamental values in the development of AI. I will focus this introduction on the impact of generative AI on the concept of internet openness. As we know, generative AI is a versatile innovation with vast potential across many sectors and, more broadly, for the economy and society. These technologies also raise several legal, societal, and technical issues that are progressively being tackled. But we can see that policymakers, notably at European Union level, have primarily focused their action and initiatives on the risks of these systems in terms of security and data protection, as seen in the EU AI Act. The impact of these technologies on internet openness, and the potential restrictions these applications could bring on users' capacity to access, share, and configure the content they have access to through the internet, have only now started to become a topic of attention in the public debate. And yet, generative AI applications are becoming a new intermediary layer between users and Internet content, increasingly unavoidable. For example, a study published by Gartner this year predicts that search engine traffic could decline by 25% by 2026 due to the rise of AI chatbots. And aside from these conversational tools, generative AI systems are increasingly being adopted by traditional digital service providers, including through well-established platforms like search engines, social media, and connected devices. From this perspective, we can say that generative AI could soon become an integral part of most users' digital activities, potentially serving as a primary gateway for accessing content and information online. Thanks to their user-friendly interfaces, generative AI tools open up new possibilities to a wider range of users. With generative AI, it has never been easier to create text, images, or even code. However, we must also consider the challenges and risks in terms of internet openness and users' empowerment. At ARCEP, we have for a long time emphasized that internet service providers, the main focus of EU open internet regulation, are not the only players that may negatively impact internet openness, understood as a right for end-users and users in general to access and share freely the content of their choice on the internet. In 2018, we published a report highlighting that, complementary to net neutrality obligations, the impact of search engines, operating systems, and other structuring platforms on devices and internet openness should be tackled with appropriate regulatory replies. At EU level, the Digital Markets Act, adopted in 2022, has introduced new tools for promoting non-discrimination, transparency, and interoperability measures, addressing some of the problems raised by gatekeepers. But for us, now is the time to assess the impact of generative AI on internet openness and users' empowerment. This is why we have started to work on the issue and have already sent first observations to the European Commission, based on our experience with net neutrality. And we can already see effects of generative AI applications on how users access and share content online. Just to give some examples: the transition from search engines to response engines is not a neutral evolution and could restrict the user experience, as the interface of generative AI tools offers users little control and agency over the content they access, providing an ad hoc single response, often with a lack of transparency, no clear sources, and no ability for users to adjust the settings. We must also take into account the inherent technical limitations of AI, including biases, lack of explainability, and the risk of hallucinations that are now becoming part of the digital landscape. And generative AI development also brings fundamental changes to content creation, which could impact how information is shared and the diversity and richness of content available online, because as generative AI tools become primary gateways for content access, AI providers could also capture the essential part of the economic and symbolic value of content dissemination, which could threaten the capacity and willingness of traditional content providers, such as media or digital commons, to produce and share original content to the benefit of the economy and society. While these developments are concerning, at ARCEP we are convinced, as are other persons around the table today, that we can create the conditions necessary to apply the principles of the open internet to artificial intelligence, in terms of transparency, user empowerment, and open innovation. For information, we will publish next year a set of recommendations in that perspective and are looking for partners to work towards this task. And to conclude, a final word to say that we believe we have the collective responsibility to shape the future of artificial intelligence governance in a way that secures its development as a common good. This means notably the adoption of high standards in terms of openness, but also pro-innovation, sustainability, and safety. So thank you again for the opportunity to be here, and I look forward to the discussion ahead.


Olivier Crépin-Leblond: Thank you very much, Sandrine, and thank you for sharing the perspective of a regulator. We now have a perspective from the business community, and that's Sandra, data scientist and analyst at the Unicorn Group of Companies. Sandra, you should be able to unmute and take over. Sandra? We cannot hear Sandra. Can we check why we cannot hear Sandra? Sandra, can you try to speak again? No, we can't. We're not hearing Sandra online either. We're not hearing her online. Should we maybe… Sandra, can you do a last attempt? Yes, can you try to speak? There's a problem with her mic. Yes, there is a problem with… While we try to solve this, in the interest of time, let's move ahead to the next speaker, and then we will have Sandra maybe later. So, Vint Cerf, please, the floor is yours.


Vint Cerf: Well, thank you very much for asking me to join you today. This is a very, very interesting topic. I will say that AI and the Internet are not the same thing, and I think that the standardization which has made the Internet so useful may not be applicable to artificial intelligence, at least not yet. What you see are extremely complex and large models that operate very differently from each other. And the real intellectual property is a combination of the training material and the architecture and weights of each of the models that are being generated, and those are being treated largely as proprietary. So open access to an AI system is not the same as access to its insides and its training material and its detailed weights and structure. So we should be a little careful not to take the things that make the internet useful and try to force them onto artificial intelligence implementations. I don't think we're at a place where standardization is our friend yet. The one place where standardization might help a lot for generative AI and agentic AI would be standard ways, semantic ways, of interacting between these models. Humans have enough trouble speaking to each other in language, which turns out to be ambiguous. I do worry about agents using natural language as a way of communicating and running into the same problem that humans have, which is ambiguity and confusion and possibly bad outcomes. Generally speaking, if we're going to ask ourselves whether we should regulate AI in some way, I would suggest, at least in the early days, that we look at applications and the risks that they pose for the users. And so the focus of attention should be on safety, and on those who are providing AI applications showing that they have protected the users from potential hazard. I also feel strongly that there is a subtle risk in our use of generative AI. Those of you who know how these things work know that the large language models essentially compress large quantities of text into a complex statistical model. The consequence of that is that some details often get lost: we may lose specific details even though the generative output looks extremely convincing and persuasive, because of the way it's produced. I wonder whether we will end up with blurry details as a result of filtering our access to knowledge through these large language models. I would worry about that. I guess the last thing I would say is that accountability is as important in the AI world as I think it is in the internet world. We need to hold parties accountable for potentially hazardous behavior, and the same is true for parties offering AI-based applications: everybody should be accountable for any risks that these systems pose. My guess also is that we should introduce into the training material better provenance, so that we know where the material came from. This would be beneficial for websites in general, knowing where the content came from, so that we can assess its utility and accuracy. I'll stop there, and thank you for the opportunity to intervene.


Olivier Crépin-Leblond: Thank you very much, Vint. Since we are a bit pressed for time, we'll go straight over to Yik Chan Chin while we're working out with Sandra how to get her mic working. Yik Chan, you have the floor.


Yik Chan Chin: Thank you for inviting me. I'll speak from the PNAI perspective, because, as you may know, at the IGF we have a policy network called the Policy Network on Artificial Intelligence, and we did a report on interoperability, liability, sustainability, and labor issues. So first of all, I do agree with Vint Cerf in terms of the infrastructure of AI: it is actually quite different from the internet, because the AI systems and the users are supported and connected via the internet, but they're quite different, because for AI, including algorithms, data, and computing power, technically none of them must be unified or standardized. Also, according to our past experience, the interoperability of AI is an extremely complex issue; we have been working on the topic for the last two years, and it is a really extremely difficult issue. So, before I go into detail about interoperability, I think there are two principles worth paying special attention to, because when I look at the question, you talk about permissionless innovation versus the precautionary principle. When we talk about the open internet, we basically talk about permissionless innovation, and I'm not sure whether this principle should be applied to AI regulation, because AI is extremely complex, and there are certain features of AI, for example its complexity, its unpredictability, and its kind of autonomous behavior, all of which make AI quite harmful if there is a risk. So should we allow this permissionless innovation approach, which was applied to the open internet, because we know the history of the internet was, in the beginning, one of self-governance? Should we allow this to apply to the AI system? I think we should be very cautious about that. But on the other hand, we do see some overlapping principles between AI regulation and internet regulation, so I think those values should be applicable to both systems: for example, human-centeredness, for both AI and the internet, inclusiveness, universality, transparency, accountability, robustness, safety, neutrality, and capacity building. All these values are actually applicable to both systems. So, in terms of interoperability, which is my area, I would like to say something particularly focused on that. First of all, what is interoperability? Basically, we're talking about the capacity of AI systems, machine to machine, to talk to each other and communicate with each other smoothly, and this includes not only the machines but also regulatory policies, so that they can communicate and work together smoothly. But this doesn't mean the regulation or the standards have to be harmonized, because we can have different mechanisms to accommodate interoperability; for example, we can have compatibility mechanisms. Therefore, first of all, I think interoperability is crucial for the openness of the internet and of AI systems, but AI systems can be divergent as well as convergent, as I just explained, because the systems are quite different and do not necessarily have to be unified. So we need to figure out what areas of AI systems, or even of AI regulation or governance, have to be interoperable, and what areas can be allowed to diverge, respecting regional diversity. So, from the PNAI side, in our report we identified several areas which could be addressed at a global level and which need an interoperable framework. For example, AI models' risk categorization and evaluation, because we see there are different approaches to categorizing and evaluating AI risk: you have an EU approach, China has a Chinese approach, and even the US just released its own standardization framework mechanisms. So we need to have a kind of interoperable framework in terms of AI risk categorization and evaluation. The second is liability. We have a huge debate about the liability of AI systems: who should take responsibility, and what kind of responsibility, criminal or civil? We don't yet have a global framework, or even national frameworks, because there's still debate at the EU level and the national level. So the liability of AI models is another area that could be addressed at the global level. The third one, I think, is what we just mentioned: the training data sets. All these issues can be addressed at the global level. And the last thing I want to mention is how we balance regional diversity and harmonization needs. We need to respect regional diversity in AI governance, but at the same time we can establish compatibility mechanisms to reconcile divergence in regulations. There are different mechanisms we can use, but it's context-dependent, case by case. Is my time up? So the last thing I want to say is about an area we have to improve: the weak regime capacity of international institutions to coordinate alignment. That's a concept we call the weak regime capacity of international institutions, which means we have a lot of international institutions, like the ITU, IEEE, and the UN, but how can we?


Luca Belli: Can I ask you just to wrap up in one minute so that the others also have time to speak? Thank you.


Yik Chan Chin: Yeah, so the last thing is that we need to have some kind of global institution which can coordinate the different initiatives at the national, regional, and international levels in terms of the openness of AI and the openness of the internet. The GDC has not provided a concrete solution in terms of how to strengthen the multi-stakeholder approach in AI governance, but leaves it to the WSIS+20 to decide. So I think we should address this in the WSIS+20 debate. I think I'll stop there, thank you.


Luca Belli: Fantastic, let’s try to see if now Sandra can be audible. Sandra, can you try to speak so that we can check? We are not hearing you, can you try again?


Sandra Mahannan: How about now?


Luca Belli: Yes, now, yes, perfect. Keep the mic close to your mouth, please. Thank you very much. Thank you.


Sandra Mahannan: Thank you. Sorry for the whole mix up, and once again, thank you so much for the opportunity to be here.


Luca Belli: I’ll try to speak very short. If you can keep the mic very close to your mouth because literally we can hear very well when it’s close to your mouth, and not at all when it is five centimeters from your mouth. Thank you very much. We cannot hear you, Sandra, I’m sorry.


Sandra Mahannan: What’s the issue?


Luca Belli: Sandra, unfortunately we keep on not being able to hear you; I think it's a mic problem. So, if you have another microphone where you are, I suggest you try to change it while we go to the next speaker, Alejandro Pisanty, because we are not able to hear you at this moment, Sandra. So, Alejandro Pisanty, a very old friend, not because he is old at all, but because we have known each other for many years. So, please, Alejandro, the floor is yours.


Alejandro Pisanty: Thank you. Can you hear me well?


Luca Belli: Yes, very well.


Alejandro Pisanty: Thank you. Thank you, Luca and Olivia, for the yeoman’s work you did to put this session together, and to all members of the Dynamic Coalition for the Exchange of Ideas. I salute also friends I see on screen. I think I see Martin Botterman, I think I see Edesire Misolewicz and Harald Lalvestand, probably getting that right. So, briefly, because I’m going to be mostly responding as well as putting forward what I have prepared, the Dynamic Coalition was created to see, try to follow on these core values, which if you take them away, you don’t have the Internet anymore. If you take away openness, for example, you have an Internet. If you take away interoperability, you have a single vendor network and so forth. And that’s what we are trying to now extend to, or I say to challenge how much we can extend it to AI. We have to be very careful what we call AI. In people’s mind are the generative AI systems that start from text and can give you either more text in a conversational interface, or can give you images, video and audio. But artificial intelligence is a lot more things. It’s molecular modeling, it’s weather forecasting, it’s every use of artificial intelligence that we use for basically three purposes, which is finding patterns in otherwise apparently chaotic information. finding exceptions to information that appears to be completely patterned, and extrapolating from these. And we know that extrapolating from algebra, extrapolating from things that you only calibrate for interpolation, is always going to be risky. So that’s, you know, our basic explanation and concern for hallucinations in LLM systems. We have, second, one of the lessons we’ve learned for many years from this dynamic coalition, is to separate the effects of the human and the technology. To separate the effects of human agency, human intention. Cybercrime doesn’t happen and wasn’t invented by the internet. It happens because people want to take your money or want to hurt you in some ways, and now use the internet as they previously used, fax, post, or just try to cheat you face to face. Same for many other undesirable conducts. So we have to separate what people are wanting to do and how technology modifies it, for example, by amplifying it, by enabling anonymity, by crossing borders, and so forth. And same for AI. It’s not AI that is doing misinformation. We have had misinformation, I think, since at least the Babylonians, and probably even before we had written language. But now we have very subtle and easy to apply ways to apply misinformation at a large scale. But we still have to look at the source and the intention of the people who are creating and providing this misinformation, and not try to regulate the technology instead of regulating the behavior, or educating the users to avoid them falling for misinformation. Second large point here, third large point here, is not trying to regulate artificial intelligence in general, in total, but being sure that you are not, by trying to regulate what you don’t like about LLMs doing misinformation, you don’t kill them. your country’s ability to join the molecular modeling revolution for pharmaceutics, for example. There’s a recent paper by Akash Kapoor, which I think is very valuable for this session, which speaks of leaving behind the concept of digital sovereignty and replacing it with a digital agency. 
Luca and I were in a meeting two weeks ago at New America, in D.C., where this concept was put forward, and it's a very powerful one. What I extract from it is that instead of trying to be sovereign by closing borders and putting up tons of rules, which are basically copied from the rules of the countries that actually developed the technology, and which are based on fears, it means trying to be powerful, even if you have to sacrifice some sovereignty in the sense that you have to collaborate with other countries, you have to collaborate with other academic institutions, and so forth, which, by the way, has always been the way of developing technology and academic research. There's a recent French paper, which came to my attention only yesterday, that speaks about de-demonizing, stopping demonizing, artificial intelligence, without, of course, becoming confident or overconfident, but trying to regulate and to promote AI. If your country's legislators are looking to regulate AI and are not putting a lot of money into research and development, and into, let's say, as Denmark or Japan have done recently, or Italy, putting together a major computing facility for everybody to use to develop AI, they are lying to you, they are cheating you, because they are actually closing the door to the effects of innovation and condemning you to getting this only from outside the country in the end, in subtle and uncontrolled ways. How do we bring multi-stakeholder governance, which is another lesson from our Dynamic Coalition, to artificial intelligence? We have to find a way, maybe by scaring the companies with the fear of harder regulation, to bring them together with other stakeholders like academia, the users, rights-holding organizations, and so forth, as we did, for example, with the domain name market with ICANN 25 years ago. It's not necessarily doing an ICANN again, but extracting the lessons of how you bring these very diverse stakeholders together to a kind of core regulation designed for this type of system and for risks that are present in reality, not only the imaginary ones. There has also been some talk about open sourcing, which is very valuable. The risks have already been mentioned, and one risk that has not been mentioned, which we learn from the history of open source software, is derelicts: software that is abandoned, systems that are abandoned and not maintained anymore, which are very risky because defects can creep in and never be fixed. And then these things become part of the infrastructure of the internet. We have already seen some major security events happen because of unmaintained open source software that sat at the core of different systems. So the challenge here will be to avoid the delusion of a one-world government. We don't need the GDC. We don't need a UN artificial intelligence agency. We need to look more at the federated approach, and I think this will be more approachable, more available; there's a better path to it if we do as, for example, the UK has been doing: go by the verticals, go by the sector-specific type of regulation, which we already have; use all the tools society already has, like liability for commercial products, liability for public officials who purchase systems that work badly. It's as bad to purchase a system that does biased or discriminatory assignment of funds in a social security system as it is to purchase cars that end up killing people in crashes because you don't have airbags, and that would be


Luca Belli: Thank you, fantastic, thank you very much, Alejandro. I was going to remind you to wrap up, but you already did it yourself. Fantastic. So let's see if Sandra now has a new mic that works; let's give a last shot at Sandra's presentation. Sandra, can you hear us? Yes, we can hear you, we can hear you very well. Excellent, please go ahead.


Sandra Mahannan: I'm so sorry about the mix-up with my mic and all. I'm going to try to keep this very short. I would want to come in from the business angle, so to speak. I work with Unicorn Group of Companies, an AI and robotics company. I read one time that AI often reflects the biases of its creators, right? We all know that AI response quality is a very huge concern, because we have cultural biases, religious biases. Recently I was in a religious gathering where religious leaders were trying to discuss the adoption of AI and, you know, the concerning responses that AI gives, the erroneous responses and all of that. And we all know that AI responses are heavily dependent on the quality of the data fed into the model, right? And the acquisition of such data is usually not cheap; it's very expensive. We talk about computing power, we talk about acquisition of data; these are very expensive processes. So my tip would be to regulate AI not really from the user angle; the openness should come from the aspect of the developer, the development angle, where we talk about data quality, data privacy, security, data sharing protocols, operating in the market as an entity, interoperability, and all of that. Yeah, that would be my take on it. Thank you.


Luca Belli: Fantastic. Thank you very much, Sandra, for your perspective and for being so fast. Do we want to have our last speaker now? Last but not least, of course,


Olivier Crépin-Leblond: I think we'll have Wanda Muñoz, who is a member of the Feminist AI Research Network and of UNESCO's Women for Ethical AI platform. So over to you, Wanda.


Wanda Muñoz: Thank you so much. Can you hear me?


Olivier Crépin-Leblond: Absolutely.


Wanda Muñoz: Okay. Well, thank you so much. I'm delighted to be here. Thanks to the organizers for having me, and thanks, Alejandro, for recommending me to be here. I would like to take a somewhat different perspective from what has been shared so far, because what I'd like to put on the table today is my perspective as someone who comes from policymaking and from human rights implementation. So my contributions come from this perspective. And I will also build on the results of a report from the Global Partnership on AI called Towards Substantive Equality in AI, which I invite you all to review, and for which we had the amazing leadership of Anita. So I take the opportunity to thank her again for her contributions and to say that I fully agree with her intervention. I will start by sharing a few thoughts on the issue of values itself, and then I will move to human rights. First, I'd like to share that I think the core values of Internet governance have been very useful for building a common understanding of the Internet that we want and that serves the majority. But, arriving at the discussion when these values had already been adopted and implemented for a while, I want to put on the table that maybe we could benefit from analyzing these values from a gender and diversity perspective. And I think there's already a wealth of research from feminist AI scholarship in this regard. For instance, just to mention an example: the six core principles of feminist data visualization, I don't know if you are familiar with them, but I invite you to look them up, propose values such as rethinking binaries, embracing pluralism, examining power and aspiring to empowerment. These are quite different from the Internet's core set of values today, but also complementary. And what I like about this other set of values is that they question social constructs, power and the distribution of resources. These issues, to me, are inextricably linked both to Internet and to AI governance, but they are still often left out of mainstream discussions. So, that being said, I'll move to human rights. And here, what I'd like to say first is that I want to give you a couple of ideas of why a few of us insist that human rights should be front and center in any discussion on AI governance, at least on the same standing as ethics, principles and values. Maybe some of you see it differently, but human rights are not just words. Human rights are actions, policies, budgets, indicators and accountability mechanisms, which were already mentioned by Renata and Anita before. So in the context of artificial intelligence, human rights allow us to reframe the discussion on AI in different terms and to ask different questions. Let me give you three examples. Instead of saying that we must mitigate the risks of AI, what we would say from a human rights perspective is that when AI harm occurs, it systematically results in violations of human rights that disproportionately affect women, racialized persons, indigenous groups, and migrants, among others. And I'm sure you know of the many dozens or more of documented examples of these, which have affected the right to employment, to health insurance, to social services, and many others, and which you can find, for instance, in the OECD AI Incidents Monitor.
Another example: instead of saying that in AI governance we should balance risk and innovation, if we acknowledge that the benefits of innovation generally go to a privileged few, and that the brunt of the harm falls primarily on those already marginalized, then from a human rights perspective we would talk about the need for AI regulation to ensure accountability, remedy, and reparation when violations of human rights result from the use of AI. And I want to tell you that in this research we carried out for GPAI, where we consulted more than 200 people from all walks of life and backgrounds on five continents, this is possibly the number one demand that was documented in the report. I also want to say I appreciated Jake's perspective on the need for international norms, specifically regarding liability and accountability. Another example is regarding non-discrimination. I think, generally speaking, people understand non-discrimination as saying, I don't go out and say slurs to people in the street, right? So I don't discriminate. But from a human rights perspective, this is far from enough. What non-discrimination means is that you must take positive actions to avoid and to redress discrimination that already systematically exists in our organizations, in our data, in our policies. And this is particularly the case in internet and in artificial intelligence. So in a human rights framework, unless we take action, we are effectively perpetuating discrimination. Similarly, we could have a discussion about what a general notion of safety means, but unless we adopt specific actions to ensure safety in the context of the vulnerabilities of specific groups, in each specific context, we will keep excluding those who are already more marginalized. And here, Alejandro, as often, I want to respectfully disagree with you when you talk about the need not to demonize technology, because I hear this again and again, and I really don't think it's a helpful term; it is often thrown at those of us who are pointing out the risks and harms of AI. I think we are doing this from the documented impacts and from the evidence, trying to raise the alarm and to at least bring into the discussion the reality of what AI is causing, in contrast with what we see most of the time, which is this AI hype. And of course, I think that, for all the problems that the UN has, we do need it to lead an effort on AI regulation if we want to have at least some equality in terms of negotiation. This leads to larger issues with multilateralism that I hope we can discuss another time. So, just to conclude: I think when we speak about AI governance, what is at stake has the potential to change the core of how our societies function. So I fully agree with Anita on the need for a societal and collective rights approach, in addition to a human rights one. And to me, this cannot happen without regulation. So thank you.


Luca Belli: Fantastic. Thank you very much, Wanda, for bringing these really intense, thought-provoking points. I think this is a very good way to open our discussion now with the floor. We have a good 20 minutes to speak. Also, let me share with the floor that one of the intentions we had when we started designing this session was to try to distill some core elements that we could put into a report, a joint report for the next IGF. We know very well that in the six months before the next IGF there are very few things we could do in terms of outcome, but we could produce some joint paper on what the elements of an open AI ecosystem could be, or something like that. So if you have any ideas, if you can help us and guide us to identify what these core elements could be, or if you have any other reflections on what has been said, we have 20 good minutes to discuss this. Feel free to be punchy and provocative, while of course diplomatic and respectful. Just raise your hand and the mic will come to you.


Olivier Crépin-Leblond: And Luca, if I could add, there are sometimes panels at IGFs where everyone agrees with each other, and I was really pleased to see various viewpoints and some panelists not agreeing with each other. So that's really good. And by the way, if you panelists also have points to make about each other's interventions, please go ahead. Now, if you're online, you can put your hand up in Zoom and we'll see it. And of course, if you're in the room, then put your hand up and the mic will fly in your direction, or maybe be brought over to you. Does anyone wish to fire off?


Luca Belli: Who wants to start our collective exercise?


Olivier Crépin-Leblond: It’s a lot to digest.


Luca Belli: Yes, I see. Are you, Desiree, are you stretching or raising your hand? Okay, let’s break the silence with Desiree.


Audience: Hi, it's Desiree. Yeah, thank you all for your very rich interventions; there's a lot to take in. And although I don't know the exact title of the session, whether it was really focusing on the core principles of AI, this is a dynamic coalition working group on the core principles of the internet. So I'm really, really glad to see the differentiation, that AI is not the internet, confirmed by some of our panelists. And we also heard that AI is building an intermediary layer, like a user interface, between these structures. So I think it's important to see AI as something being built on top of the existing infrastructure. And my concern is really that, at this current stage, we will end up with an internet that is even fuller of deepfakes and disinformation. And in trying to have a sustainable internet, where we need to be really careful about the capacity that we have in society for running the Internet and getting bits of information through the network, should the layers of the network be looked at separately and protected? What I think I'm hearing, and I'd like to have confirmation, is that AI, being built on top, should really be regulated as the AI layer, and regulation should not go deep down into the Internet infrastructure as such. But then there are arguments that some networking parts will be using AI as well. So how do we see this regulation being played out? And what is the core principle here? Is it still net neutrality, where the network is stupid about the bits that go through it? It just raised a lot of questions in my mind.


Luca Belli: Yeah, thank you, Desiree. We have Anita online who wants to react, and we've also got Vint Cerf with his hand up. So let's take Anita's and Vint's reactions and then we'll go around the floor again.


Anita Gurumurthy: I must apologize that I won't be responding to the point from the floor, because I wanted to come in earlier. So is it okay if I go now?


Luca Belli: Yes, please go ahead. And then we have Vint and then Renata.


Anita Gurumurthy: Yeah, it's a minor point. It may be linked to what was just observed. When you talk about the Internet and the innumerable struggles in our own regulatory landscape, and I recall my organization and the good fight we put up for net neutrality in relation to our telecommunications authority, the way the idea of non-discrimination inheres in the network is, I think, very, very different from how it arises when it comes to artificial intelligence. I think AI is primarily linked to the truth conditions of a society, and you're really not necessarily prioritizing non-discrimination; I think that's a somewhat technicalized representation of the data and AI debates. What we're actually doing is using discrimination and social cognition in a manner where data can be used for social transformation. So there is a certain slippage there very often, and in fact, in our joint work with Wanda, we actually said that we might sometimes have to do affirmative action through data. So we really have to be cautious about conflating non-discrimination on the internet with principles for responsible AI.


Olivier Crépin-Leblond: Thank you. Vint?


Vint Cerf: I'm literally just thinking on the fly here about AI as another layer, as the interface into this vast information space we call the internet. First of all, Alejandro's point that machine learning covers a great deal more than large language models, his mention of weather prediction, for example, resonates with me, because we recently discovered at Google that we can do a better job predicting weather using machine learning models than using the Navier-Stokes equations. But I think that we should be thoughtful about the role that machine learning and large language models might play. One possibility is that they filter information in a way that gives us less value. That would be terrible. But another alternative is that they help us ask better questions of the search engines than we can compose ourselves. We have a little experience with this through the use of what's called a knowledge graph, which helps expand queries into the index of the World Wide Web and then pull data back. Summarization could lose information; that's a potential hazard. But I think we should be careful not to discard the utility that these large language models might have in improving our ability to ingest, analyze, and summarize information. So this is an enormous canvas, which is mostly blank right now, and we're going to be exploring it, I'm sure, for the next several decades.


Olivier Crépin-Leblond: Renata?


Renata Mielli: Just a point. Of course, AI and the Internet are not the same thing; they are different. But, in my point of view, some of the challenges we are facing in regard to how to address the risks of AI and its impacts on society are pretty much the same, in terms of the need for more transparency, accountability, diversity, and more decentralized and democratic systems, not only for the Internet but for AI. And we need to focus also on how AI is impacting the Internet and how people interact with the Internet. Now we are in a situation where, for example, when you do a search on Google, you don't have a lot of links to click on and interact with the content about something, how to cook an orange cake, for example, because the artificial intelligence brings the results and you don't need to click anymore. And a lot of the time the results are not accurate and have bias, and this is impacting the internet and how we experience it. So they are not the same thing, but one impacts the other, and we have to keep in mind that the core values we need to regulate AI at this moment have to take into account the transparency, accountability, liability, and so on, and the neutrality and other core values that we have for the internet.


Yik Chan Chin: Yes, I think in terms of internet governance, we have these core infrastructures, which we all recognize have to be a public good, even a global public good. So this is no problem; we can continue to regulate them separately as core infrastructures. But AI systems, I think, are more on the application side. There's an issue I think many people have already touched on, which is cybersecurity, because AI actually causes a lot of problems in terms of cybersecurity; it makes the internet more vulnerable to cyber attacks. So that is one area, just as my colleague said, where they have a mutual impact, especially in terms of cybersecurity, in terms of how AI may enlarge the dangers and harms to internet stability. I think that is one area we have to focus on, but there are other impacts of AI on the internet and on the core infrastructure of the internet that will require long observation.


Luca Belli: Let's get to Sandra, and unless there is anyone else with an urgent comment or question, we can then wrap up. Sandra, please, can you speak again? Yes, we hear you very well.


Sandra Mahannan: I just wanted to quickly react to what, I think, the speaker two speakers ago mentioned: a concern about having erroneous responses, because someway, somehow, AI just summarizes the feedback from searches. I totally agree with her; it was one of the points I made earlier. Whether we like it or not, AI is here to stay, and these biases are really concerning, because we would agree that there are really big wigs in the business, and they get to, I don't want to say control the narrative, but for lack of a better way of expressing it. Then the small players, no matter how accurate they are, don't really get reach; their access is really low, because the big wigs have occupied the market, which means that people automatically go there. And what happens when decentralization is not really happening, when it's not really decentralized, when people are not really getting access to those other players in the industry? That is why it is really important, I think, that regulation should really come down heavy on the side of development, the developers, and the models themselves.


Luca Belli: I think that now, as we will be kicked out of the room in two minutes, it's time to wrap up and to thank the participants for their very thought-provoking comments and presentations. I think we have illustrated very well the complexity of this issue, and also the interest in keeping this very productive joint venture going for the next IGF, to present the result of what could be a very brief report on the elements that can enable an open AI environment, as was also suggested in the chat during the session: our best effort to distill the knowledge shared over an hour and a half. I think the mics are giving up on us. Try this one. It's a sign that we have to wrap up. Thank you very much, everyone, and we will do our best to consolidate everything we have learned today into this report. Thank you very much.


Olivier Crépin-Leblond: And I should just add, if anybody is interested in joining the Dynamic Coalition on Network Neutrality or the one on Core Internet Values, come and talk to us and we'll take your email address and name, and we'll be very happy to have you on board. And thanks again to all of our panelists; really great job. So thank you so much.


Alejandro Pisanty: Thank you again and congratulations for the session.


Wanda Muñoz: Thank you.


Vint Cerf

Speech speed: 139 words per minute

Speech length: 790 words

Speech time: 338 seconds

AI and Internet are fundamentally different technologies requiring distinct governance approaches

Explanation

Vint Cerf emphasizes that AI and the Internet are not the same thing and should not be governed in the same way. He suggests that the standardization which has made the Internet useful may not be applicable to artificial intelligence, at least not yet.


Evidence

Cerf points out that AI systems are extremely complex and large models that operate very differently from each other, with proprietary training data and architectures.


Major Discussion Point

Differences and similarities between Internet and AI governance


Agreed with

Luca Belli


Renata Mielli


Agreed on

AI and Internet are distinct technologies requiring different governance approaches


Differed with

Luca Belli


Renata Mielli


Differed on

Applicability of Internet governance principles to AI


AI systems are mostly proprietary and not interoperable, unlike Internet protocols

Explanation

Vint Cerf highlights that AI systems, unlike Internet protocols, are largely proprietary and lack interoperability. He points out that the intellectual property in AI systems lies in their training data, architecture, and weights, which are often kept secret.


Major Discussion Point

Openness and interoperability in AI systems


Differed with

Anita Gurumurthy


Differed on

Openness in AI systems


AI governance should focus on regulating applications and risks, not the technology itself

Explanation

Vint Cerf suggests that AI governance should concentrate on regulating specific applications and their associated risks, rather than attempting to regulate AI technology as a whole. He emphasizes the importance of focusing on safety and protecting users from potential hazards.


Major Discussion Point

Regulation and governance approaches for AI


Luca Belli

Speech speed: 144 words per minute

Speech length: 2057 words

Speech time: 854 seconds

Some core Internet values like transparency and accountability can apply to AI governance

Explanation

Luca Belli suggests that certain principles from Internet governance, such as transparency and accountability, could be relevant to AI governance. He argues that these principles are essential for understanding how AI systems function and for ensuring accountability to users, regulators, and society.


Evidence

Belli gives examples of how transparency is crucial in both Internet traffic management and AI decision-making processes, such as in credit scoring or facial recognition systems.


Major Discussion Point

Differences and similarities between Internet and AI governance


Agreed with

Vint Cerf


Renata Mielli


Agreed on

AI and Internet are distinct technologies requiring different governance approaches


Differed with

Vint Cerf


Renata Mielli


Differed on

Applicability of Internet governance principles to AI


Unknown speaker

Speech speed: 0 words per minute

Speech length: 0 words

Speech time: 1 second

AI is building an intermediary layer on top of Internet infrastructure

Explanation

The speaker suggests that AI is creating a new layer between users and Internet content. This intermediary layer is becoming increasingly unavoidable and could significantly change how users access and interact with online information.


Evidence

The speaker cites a Gartner study predicting that search engine traffic could decline by 25% by 2026 due to the rise of AI chatbots.


Major Discussion Point

Differences and similarities between Internet and AI governance


Agreed with

Sandrine Elmi Hersi


Agreed on

AI is creating a new layer between users and Internet content


Renata Mielli

Speech speed: 116 words per minute

Speech length: 1058 words

Speech time: 544 seconds

AI and Internet governance face similar challenges around transparency, accountability and decentralization

Explanation

Renata Mielli argues that while AI and the Internet are different, they face similar governance challenges. She emphasizes the need for transparency, accountability, and decentralization in both domains.


Evidence

Mielli gives an example of how AI is impacting Internet search results, where users now often get direct answers from AI instead of links to various sources, potentially reducing transparency and diversity of information.


Major Discussion Point

Differences and similarities between Internet and AI governance


Agreed with

Vint Cerf


Luca Belli


Agreed on

AI and Internet are distinct technologies requiring different governance approaches


Differed with

Vint Cerf


Luca Belli


Differed on

Applicability of Internet governance principles to AI


Anita Gurumurthy

Speech speed: 144 words per minute

Speech length: 1108 words

Speech time: 460 seconds

Openness in AI is complex and doesn’t necessarily lead to transparency or democratization

Explanation

Anita Gurumurthy argues that the concept of openness in AI is not straightforward and does not automatically result in transparency or democratization. She suggests that openness in AI can have a wide range of meanings and implementations, not all of which contribute to scrutability or accessibility.


Evidence

Gurumurthy cites the example of GPT-4, where OpenAI declined to release details about its architecture, training data, and methods, despite being labeled as ‘open’.


Major Discussion Point

Openness and interoperability in AI systems


Differed with

Vint Cerf


Differed on

Openness in AI systems


Yik Chan Chin

Speech speed: 138 words per minute

Speech length: 1174 words

Speech time: 510 seconds

Standardization and interoperability of AI systems is extremely difficult currently

Explanation

Yik Chan Chin points out that achieving standardization and interoperability in AI systems is currently very challenging. This is due to the complex and diverse nature of AI technologies and applications.


Major Discussion Point

Openness and interoperability in AI systems


Global coordination is needed on AI risk categorization, liability frameworks, and training data standards

Explanation

Yik Chan Chin argues for the need for global coordination in several key areas of AI governance. This includes developing consistent approaches to categorizing AI risks, establishing liability frameworks, and setting standards for training data.


Major Discussion Point

Regulation and governance approaches for AI


AI poses new cybersecurity risks to Internet infrastructure

Explanation

Yik Chan Chin points out that AI technologies introduce new cybersecurity challenges to Internet infrastructure. She argues that AI can make the Internet more vulnerable to cyber attacks.


Major Discussion Point

Impacts and risks of AI systems


Wanda Muñoz

Speech speed: 173 words per minute

Speech length: 1106 words

Speech time: 383 seconds

A human rights-based approach is needed for AI governance, beyond just ethics and principles

Explanation

Wanda Muñoz argues for a human rights-based approach to AI governance, going beyond ethics and principles. She emphasizes that human rights provide a framework for concrete actions, policies, budgets, indicators, and accountability mechanisms.


Evidence

Muñoz gives examples of how a human rights perspective can reframe AI governance discussions, such as focusing on systematic human rights violations resulting from AI harms and the need for accountability, remedy, and reparation.


Major Discussion Point

Regulation and governance approaches for AI


AI systems can perpetuate and amplify existing societal biases and discrimination

Explanation

Wanda Muñoz points out that AI systems can reinforce and exacerbate existing societal biases and discrimination. She argues that unless specific actions are taken, AI will continue to perpetuate these issues.


Evidence

Muñoz mentions documented examples of AI harms disproportionately affecting women, racialized persons, indigenous groups, and migrants in areas such as employment, health insurance, and social services.


Major Discussion Point

Impacts and risks of AI systems


Sandra Mahannan

Speech speed: 124 words per minute

Speech length: 517 words

Speech time: 249 seconds

Regulation should focus more on AI developers and models rather than users

Explanation

Sandra Mahannan suggests that AI regulation should primarily target developers and models rather than users. She argues that this approach would be more effective in addressing issues such as data quality, privacy, security, and interoperability.


Major Discussion Point

Regulation and governance approaches for AI


Sandrine Elmi Hersi

Speech speed: 111 words per minute

Speech length: 843 words

Speech time: 451 seconds

AI could restrict user agency and transparency in accessing online information

Explanation

Sandrine Elmi Hersi expresses concern that AI systems, particularly generative AI, could limit users’ ability to access and control the content they see online. She suggests that AI is becoming an unavoidable intermediary layer between users and Internet content.


Evidence

Hersi cites a Gartner study predicting a 25% decline in search engine traffic by 2026 due to the rise of AI chatbots.


Major Discussion Point

Impacts and risks of AI systems


Agreed with

Unknown speaker


Agreed on

AI is creating a new layer between users and Internet content


Alejandro Pisanty

Speech speed: 154 words per minute

Speech length: 1191 words

Speech time: 461 seconds

Generative AI could lead to loss of information detail and accuracy

Explanation

Alejandro Pisanty expresses concern that generative AI, particularly large language models, might result in the loss of specific details and accuracy in information. He suggests that the compression of large amounts of text into statistical models could lead to the omission of important details.


Major Discussion Point

Impacts and risks of AI systems


Agreements

Agreement Points

AI and Internet are distinct technologies requiring different governance approaches

speakers

Vint Cerf


Luca Belli


Renata Mielli


arguments

AI and Internet are fundamentally different technologies requiring distinct governance approaches


Some core Internet values like transparency and accountability can apply to AI governance


AI and Internet governance face similar challenges around transparency, accountability and decentralization


summary

While AI and the Internet are distinct technologies, there are some shared governance challenges and principles that can be applied to both, particularly around transparency and accountability.


AI is creating a new layer between users and Internet content

speakers

Unknown speaker


Sandrine Elmi Hersi


arguments

AI is building an intermediary layer on top of Internet infrastructure


AI could restrict user agency and transparency in accessing online information


summary

AI is becoming an intermediary layer between users and Internet content, which could significantly change how users access and interact with online information.


Similar Viewpoints

The concept of openness in AI is not straightforward and does not automatically result in transparency or interoperability, unlike in Internet protocols.

speakers

Anita Gurumurthy


Vint Cerf


arguments

Openness in AI is complex and doesn’t necessarily lead to transparency or democratization


AI systems are mostly proprietary and not interoperable, unlike Internet protocols


AI regulation should focus on specific applications, risks, and developers rather than attempting to regulate the technology as a whole or targeting users.

speakers

Vint Cerf


Sandra Mahannan


arguments

AI governance should focus on regulating applications and risks, not the technology itself


Regulation should focus more on AI developers and models rather than users


Unexpected Consensus

Need for global coordination in AI governance

speakers

Yik Chan Chin


Wanda Muñoz


arguments

Global coordination is needed on AI risk categorization, liability frameworks, and training data standards


A human rights-based approach is needed for AI governance, beyond just ethics and principles


explanation

Despite coming from different perspectives (technical and human rights), both speakers emphasize the need for global coordination in AI governance, suggesting a broader consensus on the international nature of AI challenges.


Overall Assessment

Summary

The main areas of agreement include recognizing AI and the Internet as distinct technologies with some shared governance challenges, acknowledging AI’s role as a new intermediary layer in accessing online content, and the need for focused regulation on AI applications and developers.


Consensus level

There is a moderate level of consensus among the speakers on the fundamental challenges and approaches to AI governance. However, there are varying perspectives on the specific methods and focus areas for regulation. This suggests that while there is a shared understanding of the importance of AI governance, there is still a need for further discussion and refinement of specific governance strategies.


Differences

Different Viewpoints

Applicability of Internet governance principles to AI

speakers

Vint Cerf


Luca Belli


Renata Mielli


arguments

AI and Internet are fundamentally different technologies requiring distinct governance approaches


Some core Internet values like transparency and accountability can apply to AI governance


AI and Internet governance face similar challenges around transparency, accountability and decentralization


summary

While Vint Cerf emphasizes the fundamental differences between AI and the Internet, suggesting distinct governance approaches, Luca Belli and Renata Mielli argue that some core Internet governance principles can be applied to AI governance.


Openness in AI systems

speakers

Anita Gurumurthy


Vint Cerf


arguments

Openness in AI is complex and doesn’t necessarily lead to transparency or democratization


AI systems are mostly proprietary and not interoperable, unlike Internet protocols


summary

Anita Gurumurthy argues that openness in AI is complex and doesn’t automatically lead to transparency, while Vint Cerf focuses on the proprietary nature of AI systems, highlighting their lack of interoperability compared to Internet protocols.


Unexpected Differences

Perception of AI risks

speakers

Vint Cerf


Wanda Muñoz


arguments

AI governance should focus on regulating applications and risks, not the technology itself


AI systems can perpetuate and amplify existing societal biases and discrimination


explanation

While Vint Cerf, as a technology pioneer, seems to take a more neutral stance on AI risks, focusing on application-specific regulation, Wanda Muñoz unexpectedly emphasizes the systemic risks of AI in perpetuating societal biases. This difference highlights the gap between technical and human rights perspectives on AI governance.


Overall Assessment

summary

The main areas of disagreement revolve around the applicability of Internet governance principles to AI, the nature of openness in AI systems, and the appropriate focus and approach for AI regulation.


difference_level

The level of disagreement among speakers is moderate to high. While there is some consensus on the need for AI governance, there are significant differences in perspectives on how to approach it. These differences reflect the complex and multifaceted nature of AI governance, involving technical, legal, and human rights considerations. The implications of these disagreements suggest that developing a unified approach to AI governance will be challenging and may require balancing multiple perspectives and priorities.


Partial Agreements

Partial Agreements

All three speakers agree on the need for AI regulation, but they differ in their approaches. Vint Cerf suggests focusing on applications and risks, Sandra Mahannan emphasizes regulating developers and models, while Wanda Muñoz advocates for a human rights-based approach.

speakers

Vint Cerf


Sandra Mahannan


Wanda Muñoz


arguments

AI governance should focus on regulating applications and risks, not the technology itself


Regulation should focus more on AI developers and models rather than users


A human rights-based approach is needed for AI governance, beyond just ethics and principles


Takeaways

Key Takeaways

AI and Internet are fundamentally different technologies requiring distinct governance approaches, though some core Internet values like transparency and accountability can apply to AI governance.


Openness in AI is complex and doesn’t necessarily lead to transparency or democratization. AI systems are mostly proprietary and not interoperable, unlike Internet protocols.


AI governance should focus on regulating applications and risks, with emphasis on human rights, developer accountability, and global coordination on key issues like risk categorization and liability.


AI poses new risks around restricted user agency, perpetuation of biases, cybersecurity vulnerabilities, and potential loss of information detail and accuracy.


Resolutions and Action Items

Develop a joint report for next IGF on elements that can enable an open AI environment


Continue collaboration between the Dynamic Coalition on Core Internet Values and Dynamic Coalition on Net Neutrality on AI governance issues


Unresolved Issues

How to balance innovation and risk mitigation in AI regulation


Extent to which Internet governance principles can or should be applied to AI governance


How to ensure AI systems enhance rather than restrict access to diverse online information


Approaches for global coordination on AI governance given differing national/regional priorities


Suggested Compromises

Focus AI regulation on applications and risks rather than the underlying technology


Adopt a human rights-based approach to AI governance while still allowing for innovation


Develop compatibility mechanisms to reconcile divergent regional AI regulations while respecting diversity


Thought Provoking Comments

We now have centralized data value creation by a handful of transnational platform companies, and we could actually have had, as Benkler pointed out long ago, a different form of wealth creation.

speaker

Anita Gurumurthy


reason

This comment challenges the current paradigm of data and wealth concentration in the tech industry, suggesting there were alternative paths for more distributed value creation.


impact

It shifted the discussion to consider the economic implications and power dynamics of AI and internet governance, rather than just technical aspects.


AI and Internet are not the same thing. And I think that the standardization which has made the Internet so useful may not be applicable to artificial intelligence, at least not yet.

speaker

Vint Cerf


reason

This comment importantly distinguishes AI from the internet and questions whether internet governance principles can be directly applied to AI.


impact

It prompted participants to more carefully consider which internet governance principles may or may not be applicable to AI, rather than assuming direct transferability.


From a human rights perspective, we would talk about the need for AI regulation to ensure accountability, remedy, and reparation when violations of human rights result from the use of AI.

speaker

Wanda Muñoz


reason

This comment reframes the discussion of AI governance in terms of human rights, emphasizing accountability and remediation.


impact

It broadened the conversation beyond technical and economic considerations to include a human rights perspective on AI governance.


Generative AI applications are becoming a new intermediary layer between users and Internet content, increasingly unavoidable.

speaker

Sandrine Elmi Hersi


reason

This insight highlights how AI is fundamentally changing how users interact with internet content.


impact

It prompted discussion about the implications of AI as a new layer of internet infrastructure and how this might require new approaches to governance.


Overall Assessment

These key comments shaped the discussion by broadening its scope beyond technical internet governance principles to include economic, human rights, and structural considerations specific to AI. They challenged assumptions about the direct applicability of internet governance to AI and prompted a more nuanced exploration of how AI governance might need to differ. The discussion evolved from comparing internet and AI governance to considering AI’s unique challenges and impacts on internet use and society more broadly.


Follow-up Questions

How can we balance regional diversity and harmonization needs in AI governance?

speaker

Yik Chan Chin


explanation

This is important to respect different regional approaches while still establishing compatible mechanisms for global AI governance.


How can we strengthen multi-stakeholder involvement in AI governance?

speaker

Yik Chan Chin


explanation

This is crucial for ensuring diverse perspectives are included in shaping AI policies and regulations.


How can we regulate AI from the developer angle, focusing on data quality, privacy, security, and interoperability?

speaker

Sandra Mahannan


explanation

This approach could address issues at the source of AI development rather than just regulating end-user interactions.


How can we incorporate feminist and diverse perspectives into core values for AI governance?

speaker

Wanda Muñoz


explanation

This could lead to more inclusive and equitable AI systems by questioning social constructs, power dynamics, and resource distribution.


How can we ensure accountability, remedy, and reparation when human rights violations result from AI use?

speaker

Wanda Muñoz


explanation

This is critical for addressing the disproportionate harm AI can cause to marginalized groups.


How can we develop international norms specifically regarding liability and accountability in AI?

speaker

Alejandro Pisanty


explanation

This is important for establishing consistent global standards for responsible AI development and use.


How can we separate the effects of human agency and intention from the technology itself in AI governance?

speaker

Alejandro Pisanty


explanation

This distinction is crucial for appropriately addressing issues like misinformation and cybercrime in the context of AI.


How can we regulate AI in specific verticals or sectors rather than attempting to create one-size-fits-all regulations?

speaker

Alejandro Pisanty


explanation

This approach could lead to more effective and tailored regulations for different AI applications.


How can we ensure transparency in AI systems, particularly in complex models like large language models?

speaker

Luca Belli


explanation

Transparency is essential for accountability and understanding how AI systems make decisions.


How can we address the potential loss of specific details in AI-generated content?

speaker

Vint Cerf


explanation

This is important to maintain the accuracy and richness of information as AI systems become more prevalent in content creation and summarization.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #153 Internet Governance and the Global Majority: What’s Next


Session at a Glance

Summary

This discussion focused on strategies for improving global majority representation and participation in internet governance forums. Panelists from Nepal, Peru, and Uganda shared insights on challenges and opportunities for meaningful engagement. Key issues highlighted included the need for better access to internet infrastructure in developing regions, capacity building for diverse stakeholders, and more inclusive funding mechanisms to enable participation in global forums.


The panelists emphasized the importance of a bottom-up, multi-stakeholder approach to internet governance. They advocated for engaging youth, parliamentarians, and other underrepresented groups through targeted outreach and education. Successful examples were shared, such as youth-led national Internet Governance Forums and capacity-building programs for legislators.


Challenges discussed included power asymmetries, fragmented efforts among civil society groups, and difficulties in translating global discussions to local contexts. The panelists stressed the need for harmonized agendas, collaborative research, and leveraging existing networks to amplify global majority voices. They also highlighted the importance of localizing narratives and ensuring meaningful representation beyond token participation.


Recommendations for global partners included providing more inclusive funding opportunities, aligning existing resources more effectively, and ensuring global majority representatives have “seats at the table” in key discussions. The panelists concluded that while progress has been made, continued efforts are needed to create a truly inclusive and representative system of global internet governance.


Keypoints

Major discussion points:


– Challenges in achieving meaningful participation from global majority stakeholders in internet governance forums, including funding limitations, capacity gaps, and power imbalances


– The importance of localizing global internet governance issues and narratives to make them relevant at national/regional levels


– Strategies for engaging policymakers and parliamentarians on internet governance topics, such as targeted capacity building programs


– The need for better coordination and alignment of efforts among different stakeholders and initiatives


– Ways to improve multi-stakeholder collaboration, especially bringing in private sector and government voices


The overall purpose of the discussion was to explore how to maximize the impact and representation of global majority voices in key internet governance forums and processes, with a focus on practical strategies and lessons learned.


The tone of the discussion was constructive and solution-oriented. Panelists spoke candidly about challenges but maintained an optimistic outlook, offering concrete examples of successful approaches. The conversation had a collaborative feel, with panelists building on each other’s points. There was a sense of shared purpose in working to improve global internet governance processes.


Speakers

– Amara Shaker-Brown: Moderator


– Ananda Gautam: Youth leader focused on democratizing human rights in the digital age, advocate for inclusive technology policies, co-founder of Open Internet Nepal


– Paola Galvez: Tech policy consultant, founding director of IDON AI Lab, UNESCO's lead AI national expert in Peru, team leader for the Center for AI and Digital Policy


– Peace Oliver Amuge: Africa Regional Strategy Lead with the Association for Progressive Communications, member of the UN Multistakeholder Advisory Group of the Internet Governance Forum


Additional speakers:


– Audience member: Academic researcher from Australia interested in youth IGF initiatives


Full session report

Revised Summary of Internet Governance Forum Discussion on Global Majority Participation


This discussion focused on strategies for improving global majority representation and participation in internet governance forums, with particular emphasis on the Internet Governance Forum (IGF), the WSIS+20 process, and the Global Digital Compact. Panellists from Nepal, Peru, and Uganda shared insights on challenges and opportunities for meaningful engagement, emphasising the need for a bottom-up, multi-stakeholder approach to internet governance.


Speakers:


– Ananda Gautam: Youth activist and organizer from Nepal


– Paola Galvez: Digital rights advocate from Peru


– Peace Oliver Amuge: Technology policy expert from Uganda


Key Challenges:


1. Internet Access and Infrastructure


Peace Oliver Amuge highlighted the lack of internet access in many regions, particularly in Africa, as a fundamental challenge. This issue was seen as a prerequisite to addressing other governance concerns.


2. Capacity Building and Funding Limitations


All panellists agreed on the critical need for capacity building among diverse stakeholders, including civil society, policymakers, and parliamentarians. Funding challenges were identified as a significant barrier to participation in global forums and capacity-building efforts.


3. Power Asymmetries


Paola Galvez drew attention to the power imbalances present in global forums, which can hinder meaningful participation from global majority representatives.


4. Fragmented Efforts


Peace Oliver Amuge pointed out the lack of a harmonised agenda among civil society groups, leading to fragmented efforts that dilute the impact of global majority voices in governance discussions.


5. Inclusive Timing and Venues


The importance of considering timing and venue accessibility for global forums was emphasized to ensure broader participation from diverse regions.


Strategies for Improvement:


1. Multi-stakeholder Engagement


The panellists unanimously supported a multi-stakeholder approach to internet governance:


– Peace Oliver Amuge advocated for a bottom-up approach starting from the grassroots level.


– Ananda Gautam emphasised the importance of engaging government and private sector support.


– Paola Galvez focused on leveraging existing networks and alliances.


2. Localisation of Global Issues


Speakers stressed the importance of translating global discussions to local contexts, making abstract global discussions relevant to national and local realities.


3. Targeted Outreach and Education


Successful examples were shared of engaging underrepresented groups:


– Ananda Gautam highlighted the Youth IGF Nepal initiative.


– Peace Oliver Amuge mentioned APC’s African School on Internet Governance.


4. Improving Funding Mechanisms


Panellists agreed on the need for more inclusive and creative funding opportunities to ensure meaningful participation from the global majority.


5. Leveraging Existing Structures


Peace Oliver Amuge suggested utilising existing structures like national and regional IGFs to foster multi-stakeholder dialogue and build capacity.


6. Ensuring Meaningful Representation


Paola Galvez emphasised the importance of direct representation from the Global South in governance discussions.


7. Research and Stakeholder Mapping


The need for comprehensive research and mapping of stakeholders and resources was highlighted as crucial for effective engagement.


8. Youth Initiatives


The role of youth-led initiatives in internet governance was emphasized, with examples of successful youth engagement shared by the panellists.


Conclusion:


The discussion revealed a high level of consensus among speakers on the key challenges and potential solutions for improving global internet governance. While progress has been made, continued efforts are needed to create a truly inclusive and representative system. The panellists maintained an optimistic outlook, offering concrete examples of successful approaches and emphasising the shared purpose of working towards more equitable and effective internet governance processes.


Moving forward, the key areas for focus include improving internet access and infrastructure, enhancing capacity building efforts, developing more inclusive funding mechanisms, and ensuring meaningful representation from diverse regions in governance discussions. By addressing these issues and leveraging initiatives like the Global Digital Compact and WSIS+20 process, stakeholders can work towards a more inclusive and effective global internet governance framework that truly represents the needs and perspectives of the global majority.


Session Transcript

Amara Shaker-Brown: International Media Assistance, and the Center for International Private Enterprise have been running the Open Internet for Democracy Initiative, a program to build a network of open internet advocates who champion the democratic values and principles to guide the future development of the internet and how it works. Our Open Internet Leaders Program, two alumni of which are with us today along with their mentor, has been a key part of our work. Our leaders are emerging experts in digital rights and open internet issues from the global majority, representing civil society, the media, and the local private sector. So I will quickly introduce our panelists, and then we can jump right in. We'll start with Ananda. Ananda Gautam is a passionate youth leader focused on democratizing human rights in the digital age, advocating for inclusive technology policies since 2018. His commitments include leading global initiatives like the Internet Society Youth Standing Group and co-founding Open Internet Nepal, while also promoting digital freedom and cybersecurity. Through research, advocacy, and capacity-building initiatives, he strives to empower young people in shaping the future of internet governance. And he was one of our leaders. Paola Galvez Callirgos is a tech policy consultant dedicated to advancing ethical AI and human-centric digital regulation globally. She holds a Master of Public Policy from the University of Oxford, congratulations, recently, and serves as the founding director of IDON AI Lab, UNESCO's lead AI national expert in Peru, and as a team leader for the Center for AI and Digital Policy. And Peace Amuge is the Africa Regional Strategy Lead with the Association for Progressive Communications, where she works on the intersection of technology, human rights, and gender. She's a member of the UN Multistakeholder Advisory Group of the Internet Governance Forum. So she has been very busy. And she is also a member of the advisory group for our Open Internet for Democracy Initiative. So thank you, all three of you, for taking the time and for coming. And we will hop right in with, you know, an easy, simple question on Internet governance. So, Peace, I guess we'll start with you and move across. How can global majority advocates maximize the impact of key themes emerging from the 2024 fora, such as NetMundial, the G20, and WSIS, to advance a free, open, and interoperable Internet in their regions? So, taking some of these global themes and applying or advocating for them at a regional level.


Peace Oliver Amuge: Thank you very much, Amara, and everyone else who has joined us in this room. I’m privileged to be part of this panel and to have these very key discussions. I think to me, what would be important is to, first of all, unpack what we mean by a free, open internet. What does that mean to us? What does that mean to the different stakeholder groups that we have, to the different communities that we are talking about, the different contexts? I think when we unpack that, then we will know what we exactly want, because we need to have some clarity when we talk about the open, free internet that we want to advance. And when we have this clarity, then we can set our priorities and know if we’re talking about affordable access, if we’re talking about local content. I think it would be important to have that clarity, to unpack and know what we mean, and then get into more detail, create these priorities that I’ve just mentioned, and have our agenda set. Then we go into having a very harmonized strategy for us to benefit from or leverage all these processes that are happening. And then, more and more, we need to engage and embrace collaborations and synergies across the different stakeholders, and embrace the multi-stakeholder approach. And when we talk about the internet, for me, as someone that comes from the African region, a very crucial aspect to talk about is access. We cannot talk about any other issues if we are not connected: meaningful connectivity, having local communities connected, because we still have very many people who are not connected. So I think for me, that is something that I would want to say as we start this conversation: setting our priorities. And for me, as someone that comes from the African region, access is so important.


Paola Galvez: Thank you, Amara. Hello, everyone. Thank you for joining us. Let me give you a perspective from someone that is from Latin America; I’m from Peru. Can you hear me well now? Yes, thank you. So let me provide my perspective as someone that comes from Peru, a developing country that has participated actively in these processes and is very well aware of what’s happening, but that at the same time has critical challenges happening on the ground. I can mention political crises that sometimes cause these very critical digital topics to get overlooked. I do believe that we need to maximize the impact of all these fora to look ahead to what’s happening after 2025. And for that I have three ideas I would like to share. First, strategic alignment. We are several actors, and we need to work as a community, with coalition building, to bring our voices to the ones that are making the decisions, the ones at the table. Second, try to bring localized narratives. It can feel abstract sometimes when we think about these global discussions, right? But they are absolutely important for our national and local realities. So how can we localize these topics? By thinking of local examples, right? How to bring them to our national context. This is very important. And third but not least, I believe monitoring implementation is very, very important: advocating for clear mechanisms that track commitments made during these fora. It is usually the government implementing them, but there are mechanisms in place, so civil society, the media, and also the private sector can play a vital role in this. I’m going to be brief so we can discuss among ourselves, so I’ll keep it there.


Amara Shaker-Brown: Go ahead, Ananda.


Ananda Gautam: Thank you. So, as Peace set out the context of Africa, I come from the Global South as well, where certain issues persist. If we look at the global context, we have, I think, more than 35 percent of people still not connected to the internet. And then there is another issue: the people who are recently connected to the internet don’t have enough capacity to fully leverage this platform or the opportunities that come with access to the internet. So in my context, the two major issues would be access and empowerment, as I call it. The first thing is that people should have access regardless of any barrier, and the barrier could be access to infrastructure, or there might be other barriers like language, affordability, and other accessibility issues, like how a person with disabilities can have access to the internet. On a broad level, it is about how we can ensure meaningful access to the internet. Another thing is that, after having access, we will be discussing human rights from a digital perspective. Are our human rights protected when we are online or when we are leveraging different technologies? Now we are talking about AI governance, and we are also now talking about the AI gap; along with the digital divide, the AI divide is also a very concerning topic today. So how do we ensure that these policies and these kinds of forums have effect? We have national forums and regional forums, and we are just ahead of the WSIS+20 review process. We just passed the Global Digital Compact, which has a very positive message, but we are yet to see the WSIS+20 process. One of the major challenges is that these kinds of forums are not binding; they don’t have a kind of ripple effect. No stakeholder is bound to implement the takeaways. But having some mechanism through which governments could uphold the values that are taken away from this forum is very important. And, actually focusing on the perspective of young people, I believe young people have a very big stake, because they are, I believe, the biggest stakeholder of the internet, and they are the future internet governance leaders who need to be equipped with enough capacity and knowledge so that they can decide what is the future of the internet they want. So the three things I want you to take away from this first round of our sharing are: we need to complement access with empowerment, and we should not forget meaningful access; we need capacity building of young people and marginalized communities so that they can leverage these spaces and actually make their voices heard in the global forums; and the deliberations of these forums should somehow be taken back to the communities so that we uphold human rights. I will break here and go into another round of discussion. Thank you.


Amara Shaker-Brown: Thank you all. Yeah, I think we have all heard how the issues around accessibility are the first barrier, before you can even get into how the internet works. So, the IGF is a key multi-stakeholder forum, and there have been a couple of other ones this year; multi-stakeholder is the key word of the year. But in your view, what are the key challenges and opportunities for effectively implementing a multi-stakeholder approach to internet governance, especially ensuring, and I think this is the key word, meaningful representation? Not just having people in the room as a standard bearer for the global majority or for their country, but having meaningful representation in fora like the IGF, WSIS, or the past Summit of the Future. And I guess we can go the other direction. So Ananda, we’ll start with you, then Paola, then Peace, unless someone has a burning desire to start.


Ananda Gautam: Okay, then. I think this is a very challenging question, but from my point of view, having been engaged with national and regional initiatives for a while, the biggest asset that internet governance has is its, I think, 270-plus national and regional initiatives. That being said, since these are United Nations-backed initiatives, I see that the United Nations should also have some form of cooperation with national governments and other international agencies so that they adhere to these initiatives, so that we can implement the multistakeholder discussions not only at the global level, but also at the regional and local level. We have many challenges. I also chair the Youth IGF Nepal, and many UN agencies themselves still don’t know about the Internet Governance Forum, so there is a huge gap. They are leading many digital initiatives, but they are not aware that these deliberations are happening and that they are led by the Internet Governance Forum. So I see a kind of structural coordination gap, what do you call it. And if this coordination could be made, the United Nations is one of the major institutions, a global actor, that can foster partnership and develop a coordination mechanism with all stakeholders, including governments. This year, I think, the IGF has spent many funds on bringing in the parliamentarians, and if we can make those parliamentarians take these initiatives back to their countries and support the national and regional initiatives, I see that in some time, not by tomorrow but in a couple of years, we can make them realize it. We need to build those kinds of mechanisms, and I see those mechanisms lacking. At the global level, we come here, and it is so good. We go to a regional IGF, and it is a bit less; the private sector is not very interested, government participation is always minimal, and it is overcrowded by civil society. So when we call for equal footing on multistakeholder bodies, there should be equal participation as well. The IGF needs to have initiatives and coordination mechanisms with all UN specialized agencies and all the projects or initiatives that are being done, whether on eliminating the digital divide, on digital safety, or on AI. Whichever UN agency is working on them, if they align their efforts with the Global Digital Compact and the deliberations of the Internet Governance Forum, I think this is the best way we can get this done through a bottom-up approach. Thank you.


Paola Galvez: So I’ll continue, as you said, Amara. I’m just taking this off so I don’t hear myself, but let me know if you guys cannot hear me well, thank you. I see different challenges. First of all, sometimes these fora where vital discussions happen are in places that are far away, and it requires lots of resources, first of all monetary. So funding is one big barrier that we have, and then let me go to the opportunity. That’s why sometimes we need to look out for funding support, for organizations that are able to support civil society and the media to come and join, academics as well, all the stakeholders, because small organizations cannot join on their own. I remember I was able to come to an IGF for the first time in 2019 thanks to the Internet Society and a fellowship called the ISOC Youth Ambassadors. So these are great opportunities that we can look out for to come and have meaningful representation. This is one thing. Second, and this is tied to another challenge for developing countries such as ours: the lack of knowledge and experience. Sometimes the UN Internet Governance Forum, or NetMundial, sounds like these big events where only experts come, right? And that’s far from the truth. If we really understand these processes, actually what we want is people that are really passionate and want to have a stake in the future of the internet, right? But it takes, and this is the opportunity, bringing more information and making it accessible for everyone. And I think the three of us at this table try to do this in our localities, informing people about what an Internet Governance Forum is, at a local level, at a regional level, to try to motivate other organizations to come and join. That’s why the newcomer sessions are very important, because it can be a monster: if you look at the schedule in the app, there are so many sessions happening at the same time. So having mentors, somebody that can pair with you and follow you during the IGF, could be great, and I think that’s a great example of best practice. I remember my first time, it was somebody from ISOC that joined me and guided me a bit on how to make the most out of this forum. A third challenge, I’d say, is power asymmetries. Even when we are here at the same table, let’s say, we are all here, and then we will have, I think, an open Q&A so everyone can comment and ask a question, there are still power asymmetries. In these global forums, agendas can be shaped by wealthier nations, more developed countries, richer or bigger corporations that can hold bilateral meetings that sometimes a small CSO doesn’t know are happening or doesn’t know could happen, right? And for the opportunity, I would say creating more inclusive mechanisms to push for everybody to really have the exact same power in the conversation, so that we can all have meaningful participation and our opinions can be heard as they should be. Thank you.


Peace Oliver Amuge: Thanks, Paola and Ananda. I totally agree with what you said, so I will just add a few things. One challenge that I see, and I want to repeat it, is capacity building, and the lack of capacity that exists among the different stakeholders. But something we must acknowledge is that, over time, at least some steps have been made in regard to civil society. Lots of initiatives have been going on to build the capacity of civil society organizations and participants. Not to say that they are already there, no, but at least I want to acknowledge that some strides have been made. Where we also need to put our focus is the judiciary, the parliament, the law enforcers, the government, the private sector. We need to look at these other stakeholder groups to build their capacity. Because if I look at Africa, at the region, we’ve had organizations like Equality Now, APC, IT for Change, KICTANet, NDI, a couple of organizations that have really been trying to build capacity. So I think we need to again map out and look at where the gaps are, who the people are that we need to focus on in terms of capacity building. And I want to also say that, yes, I really agree on the funding opportunity, because this will also facilitate the capacity building, bridging the skills gap that I am mentioning, and ensuring that we have meaningful participation from these stakeholder groups when we are at the IGF, and in the different conversations that are happening at WSIS+20 and the GDC process that just ended. Another challenge that we see is a limited, harmonized agenda. As civil society, do we have an agenda from the national level, going to the sub-regional, pushing forward to the regional, and coming to the global level? Like Paola was saying, we will all have different agendas, so I think we need a kind of harmonized agenda or strategy. There is also still a problem of fragmented efforts. We are doing so much; even the capacity building that I just spoke about is very fragmented. We need to harmonize our efforts as we address these challenges that we see. I think we also need to be very inclusive when we make programs. We need to be very flexible, acknowledge and be aware of the different contexts, and look at our participants as people that have different challenges and abilities: looking at women, looking at persons with disabilities, and looking at the timing when these conversations are happening. I think you all remember when the GDC consultations were happening online. It was not very inclusive for some people. I think, Ananda, for you it was happening when it was very late; it was happening in my afternoons. And also, let’s look at the IGF now. It’s happening around Christmas time, when some people are already off work. Some people are working until the 15th and taking their break. So I think this kind of timing should be very inclusive, and we need to look at that as well. And there is the challenge of venues. If these conversations are happening in Geneva or in New York, it’s not inclusive.
Even when you have funding to travel there, you might be limited in terms of visas; you will not get visas. So I think these are some of the little things that might be ignored, but they are a big challenge to our participation and engagement, and to us having the impact that we desire. And again, to echo what Ananda mentioned, having these other stakeholders in the room, having a multi-stakeholder conversation, not having more of maybe civil society but having everybody in the room, is one of the challenges that we still continue to have. Thank you.


Amara Shaker-Brown: Thank you. Yeah, building off that a little bit: have you seen in your work in the past year, or do you have ideas for the future on, more effective ways that civil society, media, and the private sector can collaborate to engage with these multilateral processes? Or even ways civil society can help bridge that gap and pull other stakeholders in? I know some of those divides are hard to bridge, but is there any work that you have seen, either at the local level or at the global level, of ways to really bring those non-governmental stakeholders together, especially trying to engage the private sector?


Peace Oliver Amuge: Thank you, Amara. So Paola was pointing at me, so yes, okay, yes. One thing that I would suggest is definitely the bottom-up approach. When we come here, it’s a very common habit that probably many people experience: we meet our governments here, we meet our members of parliament here, but when we go back to our different countries or regions, we then don’t have any conversations happening. And I think we should leverage structures like the IGF that start from the grassroots. We need to embrace and leverage them for us to have meaningful conversations, have everybody participating, harmonize our efforts and avoid fragmented efforts, and put together our agenda. But one thing that I also want to mention is research. We need to have research done. We need to map the stakeholders that are putting different efforts in place. We need to map the existing knowledge. We need to map the resources that we have, at all levels really, starting from the national level, from the grassroots, and building all the way through to the global level. Putting a funding mechanism in place is a very key thing that we need to focus on as well. And after doing that research and all this mapping, we need to also leverage the power of the collective: the collaborations, the synergies that we are building. We need to leverage the synergies that we build and then come up with this strategy all together, because I think we really need to emphasize collaboration, synergy, and a multi-stakeholder approach. I think that’s what I want to say for now. Thank you.


Paola Galvez: Okay, let me just build on what Peace just said. The power of networks and alliances is huge, literally. I can name two examples. For the presence of Latin America in the global Summit of the Future, for instance, I saw Al Sur, a coalition of several digital rights NGOs in Latin America; they all went to New York and participated. I see this as a fantastic example of how Latin American organizations can come as a unified voice, right? Another example: this is my fifth IGF, and most of the time this is the space where I see Ananda every year; if there is no other meeting, we will not meet. But there is immense potential in bringing new people to the conversation, new organizations, or somebody that is working on a specific topic. They don’t have to be experts on the internet; if you’re talking about children or financial services, it’s good to have their opinion too, right? And they may not know about the IGF. For instance, I built on the Center for AI and Digital Policy community to work on a proposal on algorithmic transparency, this is an example, and by letting that community know what was going to happen in December at the IGF, many people reached out asking: what is the IGF, right? So this is a good example, and there are newcomers participating in this IGF, and they will provide expertise and evidence for new regulation and the future of AI; this is an example once again. But I do think good practices are happening. So, if you’re thinking about civil society and media, we need to be very creative, right? Because most of us know about the UN call for travel support that exists, but unfortunately resources are limited, right? They cannot fund everyone that applies. So let’s be creative and try to find other governments that can provide funding, or universities. Academics are very good allies: they want us to continue our research and make reports on the discussions that are happening, so I think this is a good example. These can be use cases, and maybe people that are joining online and have some other ideas, it would be great to share them in the chat or here in the room, because we need to act as a community to start changing things, bringing the voice of the global majority to this discussion and making it very meaningful.


Ananda Gautam: Thank you, Paola and Peace. Being the last speaker, I have the privilege of building on both of their voices, and then the challenge of bringing something new. So my perspective is that it is a collaborative approach. As I mentioned before, we have challenges. I gave the invitation from the IGF to my minister of IT, but he didn’t care, because he doesn’t know the value of these kinds of meetings, and that is why we don’t have much support in many countries. If we could establish the importance of this conference, of these kinds of events and multi-stakeholder forums, and governments started taking it seriously, this would create another kind of environment: they would want to send their young people to attend these events, and maybe those young people could secure funding from government as well. That is one of the options; we need more government support, which is very, very lacking. And then, if government starts supporting these kinds of initiatives, another collaborative approach is equal collaboration between civil society and the private sector. The private sector works on their business activities, which are very much related to governance; if they are a tech company, the governance of technology is going to affect their business as well. So they need the help of civil society to make responsible use of the technologies and to build awareness and capacity on these kinds of issues. I think the private sector can help civil society with some kind of, we can say, CSR, corporate social responsibility, so that they help civil society represent these kinds of issues in these kinds of forums. Tech companies like Meta, TikTok, or whatever tech companies are there can help civil society send their representatives to these forums; they can help them create capacity building initiatives and awareness programs for the responsible use of technology. That is the kind of collaboration that is required, and this kind of forum should foster the environment for those collaborations. It should foster the environment to bring government on board so that they actually take this seriously. If government takes this seriously, the private sector would also definitely look at these things in a more responsible way, and this is what we ideally believe to be multi-stakeholder collaboration. So I call for this kind of multi-stakeholder collaboration. Another part, from my own experience: I started Youth IGF Nepal back in, I think, 2022, and within three years of establishment we have been able to make an impact; there is a Youth IGF in Nepal, and people say we need to hear them. Our ministries often call us for consultation meetings, and our minister was so happy that I delivered him the letter from the IGF. Although he didn’t come, this is an impact; by the third year he might come, we never know. People are conscious that there is an Internet Governance Forum, that there is a Youth Internet Governance Forum in Nepal, that we should hear them, they are young voices, we should include them. So I’ve been taking part in different consultations, and our community is growing. In the first year, we trained 100 people, and coming into the third year, we have trained more than 300 people. A few people are now part of the Internet Society Youth Ambassadors Program; a few people are going to the APrIGF; a few people are going to regional IGFs. And they will take these deliberations back to their communities.
So another thing is that while we come and join these kinds of forums and get the opportunity, we should give something back to the community, so that this community thrives and creates an environment that will have impact. It is not an overnight change, but we need to make our efforts. And the UN IGF should make a lot of effort, because this time, as Paola mentioned, only 10 people got this travel support. All of the fund was actually invested in the parliamentarians. If governments could have sent their parliamentarians with their own funding, those funds could have been utilized to bring more stakeholders; it could have been wonderful. And there are so many other opportunities that the UN can pull in to support bringing more young people. One of the best examples I can give is from Brazil: there is CGI.br, which supports more than 15 people every year in coming to the IGF and other regional events. That’s why you might be seeing many Brazilian young people at this IGF; that’s because there is a support mechanism in place. So we need those kinds of support mechanisms. I think that’s it from my side. Thank you so much.


Amara Shaker-Brown: Great. Thank you all. We are now going to open it up to questions, either in the room or online. Please feel free to put your question in the chat or to raise your hand. I see we have a question in the room. Yes, we have someone in the room, and she will take the mic.


Audience: Thank you. It’s great to hear about the youth IGFs; I’m really interested to know more about them. I’m from Australia, and I see that there is an IGF in Australia. I’m an academic in the research space as well, so I’m really interested to know more about that. Thank you.


Ananda Gautam: Okay, so in regards to a youth IGF in Australia, I don’t know; I was just having this thought. I was having a conversation with Jordan from auDA, which established the Australian IGF. I was about to ask him why you guys don’t start a youth IGF, but unfortunately I couldn’t. It is a good practice, when there is a national IGF, to also initiate a youth initiative. It will help bring in more young people; at least they will start looking into what the IGF is. Most people contact me when they get fellowships: they go to the global forum and regional forums and then realize there is a national forum in their own country. So this is a good one; they gain awareness. A few people want to go to the IGF and contact me about how they can go; even some people from the ministry contacted me this time saying, we want to go to the IGF, is there any funding available? And, like I said, we don’t have any funding available. There are some very basic principles: the UN IGF Secretariat has created a toolkit for establishing a youth IGF. If you want support, we from the Youth Coalition on Internet Governance and the Internet Society Youth Standing Group have helped to establish a lot of youth initiatives. We also help to fund a few initiatives. I’m not sure how relevant the amount would be for Australia, but in developing nations we are supporting a thousand dollars each for youth initiatives that are starting out. I think we jump-started five youth initiatives, supporting them from the Internet Society Youth Standing Group. We have a very limited budget, but in my tenure I want to jump-start a few more youth initiatives. So we can help you out. If you know Jordan already, we can sit with Jordan as well, because auDA is the entity that was the host for the Asia-Pacific Regional IGF, and they have started the Australian IGF. If you find someone from the Australian IGF, we can sit together and discuss how this works and help you guys start your own youth initiative. I think that is it.


Amara Shaker-Brown: Thank you, great. Thank you for your question. Any other questions in the room? Okay, I have one from Priyal online. The question is: what are some strategies that you have seen that are successful for civil society advocates to engage with their local or national policymakers, so parliamentarians, the judiciary, anything like that, on these issues? Because we know getting the multi-stakeholder voice into the multilateral system can be difficult. So, any past success there?


Ananda Gautam: Can you put the question in text? It was quite long, you know.


Paola Galvez: I can start, and then, Ananda, you can jump in. Actually, I have a good example, because as you said, Amara, it’s hard to bring the government into multi-stakeholder discussions. A while ago in Peru, we started a capacity building program on the digital economy for congressmen. And that was a multi-stakeholder effort, because the NGO Hiperderecho participated, along with COMEX, which is a chamber uniting the private sector in Peru that has a committee on the digital economy and groups different big technology companies. We developed this, and I remember it was in the context of a period when the Peruvian congress wanted to regulate the sharing economy and platforms like Uber, Cabify, et cetera. So we started with these sessions so that they could understand a bit about the technology, the telecoms, et cetera. It was a couple of weeks before the Peru IGF, and we said it would be a great opportunity for us all to have a discussion where they could listen to what civil society, media, academics, and other stakeholders have to say on this and other topics related to internet governance. And they were interested, a bit sceptical at the moment, but we had some participants from the parliament. This is one thing. On the other hand, and sometimes this happens: I was working on a public consultation in Peru as part of the UNESCO AI Readiness Assessment methodology that I conducted there, and we had to do this. Some congressmen went, but some others did not go and only sent their advisors. This is a very personal opinion, but I would say let’s not take it the wrong way. Having their advisors is also good, because they are the ones speaking into their ears, and actually they are the ones writing the bills, so that’s good too. As long as there is a commitment from the congressmen to send their team, it is a good step. So I can tell you these good examples that I have, and I hope they can be replicated. Showing them that it will benefit their work is a good way to speak to them so that they are interested in joining these discussions. Thank you.


Peace Oliver Amuge: So I want to also give an example of what we have been doing to engage legislators, or members of parliament. APC convenes the African School on Internet Governance, and last year we were able to have about 16 members of parliament coming from different African countries. Having these members of parliament in the room with the other stakeholders, like civil society and the technical community, as fellows or participants of the school was very important and very key, for them to really learn and understand these issues that we talk about. Because when we go back, usually we want to engage them, but they do not understand these issues. Even the reports that we publish, they don’t consume, because they are long reports and they do not have the time, and the rest. So we need to pull them in and bring them to these conversations that we always have. So I think having the members of parliament join the school was really key. And again, I want to give an example, still with APC, because APC is a network of other civil society organisations. Across Africa, APC works with other civil society organisations, for instance KICTANet and, for instance, WOUGNET, which is based in Uganda. And I remember in 2022, WOUGNET was able to support members of parliament and someone from the judiciary to attend the IGF. And they did not just stop at supporting their participation at the forum; they continued to engage them back home. What next? Having the other stakeholders in the room and saying, okay, we were together at the global forum, what next for us? What are some of the things that we need to talk about at the national level? So I think this kind of very strategised engagement needs to happen. And it should not stop with them coming to the conversation once, coming to the meeting once. We need to continuously engage them, to have their buy-in, to have their understanding, and also to ensure that we have people on the floor of parliament that understand the issues that we talk about. So I thought I should just share that strategy that we use. And again, even this year, we had over six members of parliament join, and while I’m at the forum here, I’ve seen some of them again join the parliamentary track. So you can be sure that continuous engagement of these members of parliament will bring some change: when we go and knock on the doors to talk about the policies and the gaps that we see, and share with them the policy briefs that we come up with from our different countries or regions, they will be able to understand the issues that we are talking about. Thank you.


Ananda Gautam: Thank you, Peace, for taking this up about the learning of parliamentarians. When the UN IGF Secretariat started the parliamentary track, I had proposed that they bring parliamentarians and young people together in a setting where they would afterwards go back to their local communities together with the young people, so that there could be some takeaways, but it never happened. I think there is one parliamentary track youth leaders dialogue today, but I’m not sure whether it is framed that way or not. Another example: we have a digital freedom coalition in Nepal, and we have been working with parliamentarians on different bills that have been proposed in Nepal. So at least we have to start at some point. That being said, everything we advocate for might not be reflected in what gets developed, but they will start listening to it. And I have observed that many parliamentarians are very keen to learn about these new issues and to build their understanding of the emerging issues, at least. So this will help in the longer run. The immediate effect might not be there, but if we advocate persistently and continuously, they will start listening. And when they feel the importance of it, they are the ones who are writing the bills; they are the ones who will be making the laws, if we make them understand that this is doable. That is it from my side. Thank you.


Peace Oliver Amuge: Ananda, let me just add something that I remembered. Just yesterday, we were having a conversation with one of APC’s members, called Rudi International, which comes from DRC Congo, and he was very happy that from DRC Congo they have brought, I think, five members of parliament; I may have forgotten the exact number. But again, I liked what he was talking about: yes, they have come, and he’s very happy that they have come, but he was thinking about what kind of sessions to guide them to. Because coming to the IGF is one thing, like I just mentioned, but afterwards, are we just going to come and move around and that’s it? Laying out a strategy for what engagements they should be part of, who the people are that they should meet, and trying to organize some people to give them some insights. So I think these are some of the things that we need to do as civil society, as different stakeholders. If we have these opportunities to have members of parliament or policymakers in the room, let’s go an extra step, have the agenda set, and have strategies laid out well, so that we can have some impact. Thank you.


Amara Shaker-Brown: Thank you. We have time for one more question, if anyone on the line has one; otherwise I have a final one that I can put. All right. So we are trying to get the global majority to be meaningfully involved. And as your global minority partners, partners from the Global North, either funders or other civil society: other than promoting, as you’ve said, more funding opportunities and more inclusive timing and locations, are there other things that we can be carrying forward if we are in the room and you are not, other than advocating for you to be in the room? Are there any specific actions that your Global North partners can be taking or supporting to help bring your messages into the conversation?


Paola Galvez: I can start. First, this reminds me of one invitation I received. I’m based in Paris, but I’m Peruvian, and thinking I was in Paris, they invited me to a global forum, let’s say. And I mentioned I was in Lima doing this project with UNESCO. And they said, ah, then you can join online, because we don’t have much funds. And, you know, at the moment, it was a tough decision, but I was very sure of what I believe, and I said: I’m really sorry, but I don’t think it’s the same engagement when you join online and deliver your speech; as passionate as I am when I speak, it will not be the same. So let’s please look for someone else who may be in Europe and can come. In the end, they made the effort and they found the funds. So this is one thing to say: I appreciate it when the Global North wants to help us, but I don’t think there’s any replacement for Global South voices; a Global North representative cannot speak for us. So I would like to reiterate the importance of being more creative and getting the funds to bring the voices of Global South representatives, because there’s nothing more important than having them. We all know that the main sessions are important, but what really, really matters are the discussions that we have during the coffee breaks, or over lunch, the spaces where we can really mention our needs, our pains, the challenges that we’re living. So, yes, advocate for having more funds, because we may have the funds and then not have the seat at the table. And try to look further, because if we are in a conversation where we’re discussing education in rural areas, it’s nice to see experts on education, but we must have the teachers that are suffering or living the challenges of digital technology’s impact on education, right? I myself found it challenging looking for these real actors that meaningfully engage, for this public consultation that I mentioned, but I knew my limits, right? So I asked an organization for help: do you know of people or communities of teachers that can come? And then people from the Awajún, an indigenous community from Peru, came all the way to Lima and participated. That’s meaningful. It’s a lot of work, but we need to do it if we want discussions to be valuable and to really have the impact on the internet we want.


Peace Oliver Amuge: Okay, thank you. I want to add on the funding mechanism. I think we need to also acknowledge the different contexts and embrace local content. Like I mentioned at the beginning, in the African region, for instance, the issue of access is very, very important. So we need to look at mechanisms that work. When we look at access, just picking on access, a mechanism that can work is community networks. Sometimes we are using the same approach everywhere, but we need to tailor our approaches to fit the context of our targets. So as we work on our different strategies, we need to ensure that we are aware of the different contexts and what can work. And like Paola said, someone else cannot come and speak for us, for the local community; so I want to just emphasize that we need to always acknowledge that. And in general, we need to have gender-responsive approaches and embrace multistakeholderism. Thank you.


Ananda Gautam: Thank you, Paola and Peace. Complementing them, my main reflection is that it is not about finding new funding; I think it is about alignment of the existing funding. Are we giving the funds to those who need them the most, or are we just distributing them? The alignment of the available funds is very important. We have so many UN agencies; will the efforts of all the specialized agencies be aligned with the Global Digital Compact, or with what WSIS+20 will deliver? And another thing is collaboration: have we sought out a collaborative approach from all the stakeholders? I think these two things need to be addressed so that we can have the kind of multistakeholder engagement that we are seeking. Thank you.


Amara Shaker-Brown: Great. Thank you all, and thank you all for joining us. I think we can, excuse me, wrap it up. Thank you to our panelists, thank you for sharing your expertise and your experiences, and for giving us some ideas of what we can be doing in this next year to ensure full, meaningful participation of all stakeholders. And with that, I will say have a lovely evening to those in the room, and a lovely day for the rest of you. Thanks so much, everyone. Thank you.



Peace Oliver Amuge

Speech speed: 156 words per minute

Speech length: 2260 words

Speech time: 864 seconds

Lack of internet access in many regions

Explanation

Peace Oliver Amuge emphasizes the importance of internet access, particularly in the African region. She argues that meaningful connectivity and access for local communities should be a priority before addressing other internet governance issues.


Evidence

Peace mentions that many people in Africa are still not connected to the internet.


Major Discussion Point

Challenges in Internet Governance and Accessibility


Differed with

Ananda Gautam


Differed on

Prioritization of internet access vs. other governance issues


Fragmented efforts and lack of harmonized agenda

Explanation

Peace Oliver Amuge points out the problem of fragmented efforts in addressing internet governance issues. She argues for the need to have a harmonized agenda or strategy across different levels, from national to global.


Evidence

She mentions the need to map out stakeholders, existing knowledge, and resources at all levels.


Major Discussion Point

Challenges in Internet Governance and Accessibility


Bottom-up approach starting from grassroots level

Explanation

Peace Oliver Amuge advocates for a bottom-up approach to internet governance. She suggests leveraging structures like the IGF that start from the grassroots to have meaningful conversations and harmonize efforts.


Evidence

She mentions the need to embrace and leverage on IGF structures for meaningful conversations and participation.


Major Discussion Point

Strategies for Effective Multi-stakeholder Engagement


Agreed with

Ananda Gautam


Paola Galvez


Agreed on

Need for multi-stakeholder collaboration


Continuous engagement with policymakers

Explanation

Peace Oliver Amuge emphasizes the importance of ongoing engagement with policymakers, particularly members of parliament. She argues that this continuous engagement is crucial for building understanding and support for internet governance issues.


Evidence

She provides an example of APC’s African School on Internet Governance Forum, which included 16 members of parliament from different African countries.


Major Discussion Point

Strategies for Effective Multi-stakeholder Engagement



Ananda Gautam

Speech speed: 144 words per minute

Speech length: 2528 words

Speech time: 1050 seconds

Need for capacity building among stakeholders

Explanation

Ananda Gautam emphasizes the importance of capacity building for stakeholders in internet governance. He argues that young people and marginalized communities need to be equipped with knowledge to participate effectively in shaping the future of internet governance.


Evidence

Ananda mentions the success of Youth IGF Nepal, which has trained over 300 people in three years.


Major Discussion Point

Challenges in Internet Governance and Accessibility


Agreed with

Peace Oliver Amuge


Paola Galvez


Agreed on

Importance of capacity building and education


Differed with

Peace Oliver Amuge


Differed on

Prioritization of internet access vs. other governance issues


Engaging government and private sector support

Explanation

Ananda Gautam argues for the need to engage both government and private sector support in internet governance initiatives. He suggests that private sector companies can help civil society through corporate social responsibility initiatives.


Evidence

He mentions the potential for tech companies to help civil society send representatives to forums and create capacity building initiatives.


Major Discussion Point

Strategies for Effective Multi-stakeholder Engagement


Agreed with

Peace Oliver Amuge


Paola Galvez


Agreed on

Need for multi-stakeholder collaboration


Better alignment of existing funding

Explanation

Ananda Gautam argues that the issue is not about finding new funding, but better aligning existing funds. He suggests that funds should be given to those who need them most, rather than just distributed broadly.


Evidence

He mentions the need to align efforts of UN agencies with the Global Digital Compact or WSIS+20 outcomes.


Major Discussion Point

Improving Global Majority Representation


Agreed with

Peace Oliver Amuge


Paola Galvez


Agreed on

Funding challenges for participation in global forums



Paola Galvez

Speech speed: 144 words per minute

Speech length: 2090 words

Speech time: 869 seconds

Power asymmetries in global forums

Explanation

Paola Galvez highlights the issue of power asymmetries in global internet governance forums. She argues that even when all stakeholders are present, agendas can be shaped by wealthier nations or larger corporations.


Evidence

She mentions that smaller civil society organizations may not be aware of or able to participate in bilateral meetings that occur during these forums.


Major Discussion Point

Challenges in Internet Governance and Accessibility


Leveraging networks and alliances

Explanation

Paola Galvez emphasizes the importance of networks and alliances in improving participation in internet governance forums. She argues that these collaborations can help bring unified voices and new perspectives to the discussions.


Evidence

She provides examples of Al Sur, a coalition of digital rights NGOs in Latin America, and her work with the Center for AI and Digital Policy Community.


Major Discussion Point

Strategies for Effective Multi-stakeholder Engagement


Agreed with

Peace Oliver Amuge


Ananda Gautam


Agreed on

Need for multi-stakeholder collaboration


Providing more funding opportunities for participation

Explanation

Paola Galvez argues for the need to provide more funding opportunities for participation in internet governance forums. She emphasizes the importance of in-person participation for meaningful engagement.


Evidence

She shares a personal experience where she advocated for funding to attend a forum in person rather than participating online.


Major Discussion Point

Improving Global Majority Representation


Agreed with

Peace Oliver Amuge


Ananda Gautam


Agreed on

Funding challenges for participation in global forums


Ensuring meaningful in-person participation

Explanation

Paola Galvez stresses the importance of ensuring meaningful in-person participation from Global South representatives. She argues that there is no substitute for having voices from the Global South present in person at these forums.


Evidence

She mentions the importance of informal discussions during coffee breaks and lunches for sharing needs and challenges.


Major Discussion Point

Improving Global Majority Representation


Agreements

Agreement Points

Importance of capacity building and education

speakers

Peace Oliver Amuge


Ananda Gautam


Paola Galvez


arguments

Lack of capacity that exists among the different stakeholders


Need for capacity building among stakeholders


Lack of knowledge and experience


summary

All speakers emphasized the need for capacity building and education among various stakeholders to improve participation in internet governance processes.


Funding challenges for participation in global forums

speakers

Peace Oliver Amuge


Ananda Gautam


Paola Galvez


arguments

Funding opportunity because this will also facilitate the capacity building


Better alignment of existing funding


Providing more funding opportunities for participation


summary

All speakers highlighted the importance of addressing funding challenges to ensure meaningful participation from diverse stakeholders in global internet governance forums.


Need for multi-stakeholder collaboration

speakers

Peace Oliver Amuge


Ananda Gautam


Paola Galvez


arguments

Bottom-up approach starting from grassroots level


Engaging government and private sector support


Leveraging networks and alliances


summary

All speakers agreed on the importance of multi-stakeholder collaboration and engagement in internet governance processes, emphasizing the need for inclusive participation from various sectors.


Similar Viewpoints

Both speakers emphasized the importance of ongoing, meaningful engagement with policymakers and ensuring in-person participation from Global South representatives in internet governance forums.

speakers

Peace Oliver Amuge


Paola Galvez


arguments

Continuous engagement with policymakers


Ensuring meaningful in-person participation


Both speakers highlighted the challenges of internet access and capacity building in developing regions, particularly in Africa and Nepal.

speakers

Peace Oliver Amuge


Ananda Gautam


arguments

Lack of internet access in many regions


Need for capacity building among stakeholders


Unexpected Consensus

Importance of local context and representation

speakers

Peace Oliver Amuge


Paola Galvez


Ananda Gautam


arguments

Fragmented efforts and lack of harmonized agenda


Ensuring meaningful in-person participation


Engaging government and private sector support


explanation

All speakers unexpectedly agreed on the critical importance of considering local context and ensuring genuine representation from diverse regions in internet governance processes, despite their different geographical backgrounds and areas of expertise.


Overall Assessment

Summary

The speakers showed strong agreement on the need for capacity building, improved funding mechanisms, multi-stakeholder collaboration, and meaningful representation from diverse regions in internet governance processes.


Consensus level

High level of consensus among the speakers, implying a shared understanding of key challenges and potential solutions in improving global internet governance. This consensus suggests that these areas could be focal points for future initiatives and policy development in the field of internet governance.


Differences

Different Viewpoints

Prioritization of internet access vs. other governance issues

speakers

Peace Oliver Amuge


Ananda Gautam


arguments

Lack of internet access in many regions


Need for capacity building among stakeholders


summary

Peace emphasizes the primary importance of internet access, particularly in Africa, before addressing other governance issues. Ananda, while acknowledging access, places equal emphasis on capacity building for effective participation in governance.


Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around prioritization of issues (access vs. capacity building) and strategies for funding and stakeholder engagement in internet governance.


Difference level

The level of disagreement among the speakers is relatively low. Their perspectives are largely complementary, focusing on different aspects of the same overarching goals. These minor differences in approach could actually lead to a more comprehensive strategy for improving internet governance and accessibility in the Global South if integrated effectively.


Partial Agreements

Both Paola and Ananda agree on the need for improved funding for participation in internet governance forums. However, Paola emphasizes creating new funding opportunities, while Ananda argues for better alignment of existing funds.

speakers

Paola Galvez


Ananda Gautam


arguments

Providing more funding opportunities for participation


Better alignment of existing funding


All speakers agree on the need for broader stakeholder engagement, but propose different strategies: Peace advocates for a bottom-up approach, Ananda emphasizes government and private sector involvement, and Paola focuses on leveraging existing networks and alliances.

speakers

Peace Oliver Amuge


Ananda Gautam


Paola Galvez


arguments

Bottom-up approach starting from grassroots level


Engaging government and private sector support


Leveraging networks and alliances




Takeaways

Key Takeaways

Internet accessibility remains a major challenge, especially in developing regions


Capacity building is needed across all stakeholder groups, not just civil society


Multi-stakeholder collaboration and a bottom-up approach are crucial for effective internet governance


More inclusive and creative funding mechanisms are needed to ensure meaningful participation from the global majority


Local context and tailored approaches are important when addressing internet governance issues


Resolutions and Action Items

Leverage existing structures like national and regional IGFs to foster multi-stakeholder dialogue


Create more inclusive mechanisms to equalize power dynamics in global forums


Develop strategies to engage parliamentarians and policymakers in internet governance discussions


Seek out and support youth initiatives in internet governance


Unresolved Issues

How to effectively engage private sector stakeholders in internet governance processes


Ways to ensure government support and participation in multi-stakeholder forums


Methods to harmonize fragmented efforts across different stakeholder groups


Strategies to address power asymmetries in global internet governance discussions


Suggested Compromises

Accepting advisor participation when parliamentarians cannot attend in person


Balancing online and in-person participation to increase inclusivity while recognizing the importance of face-to-face interactions


Reallocating existing funding rather than seeking new sources to support global majority participation


Thought Provoking Comments

I think to me, what would be important is to, first of all, unpack what we mean by free, open, you know, Internet, you know, what does that mean to us? What does that mean to the different stakeholder groups that we have, to the different communities that we are talking about, the different context?

speaker

Peace Oliver Amuge


reason

This comment highlights the importance of clearly defining terms and considering different perspectives before diving into solutions. It sets the stage for a more nuanced discussion.


impact

This shifted the conversation to focus more on the specific needs and contexts of different regions and stakeholders throughout the rest of the discussion.


I believe we are several actors and we need to work as a community with coalition building to bring the voices to the ones that are making the decision. They are on the table. Second, try to bring localized narratives. It can feel abstract sometimes when we think about these global discussions, right? But they are absolutely important for our national and local realities.

speaker

Paola Galvez


reason

This comment provides concrete strategies for improving engagement and impact in global internet governance discussions. It emphasizes the importance of both coalition-building and localizing global issues.


impact

This comment sparked more discussion about specific ways to engage local stakeholders and translate global issues to local contexts throughout the rest of the conversation.


One of the major challenges this kind of forums are not binding, it doesn’t have a kind of ripple effect, you know, people are not bound, people are like any stakeholder is not bound to implement the takeaways, but if we have any mechanism that we could, that governments could uphold the values that are taken away from this forum is very important.

speaker

Ananda Gautam


reason

This comment identifies a key challenge in translating forum discussions into real-world impact. It raises important questions about accountability and implementation.


impact

This led to further discussion about ways to increase the impact and accountability of global internet governance forums.


I appreciate when the Global North want to help us. But there is I don’t think there’s a replacement of Global North representative speaking for us. So I would like to reiterate the importance of being more creative and getting the funds to bring the voices of Global South representative, because there’s nothing more important than having them.

speaker

Paola Galvez


reason

This comment directly addresses power dynamics in global discussions and emphasizes the importance of direct representation from the Global South.


impact

This comment shifted the discussion to focus more on specific ways to increase meaningful participation from Global South representatives, rather than just having others speak on their behalf.


Overall Assessment

These key comments shaped the discussion by consistently bringing the focus back to practical, actionable strategies for improving global internet governance. They emphasized the importance of clear definitions, local context, accountability, and direct representation from the Global South. This led to a rich discussion that balanced high-level principles with specific, on-the-ground realities and challenges.


Follow-up Questions

How can we unpack and clarify what is meant by a free, open Internet in different contexts and for different stakeholder groups?

speaker

Peace Oliver Amuge


explanation

This is important to establish clear priorities and agendas for advancing Internet governance in different regions.


How can we improve access to meaningful connectivity, especially in regions like Africa where many are still unconnected?

speaker

Peace Oliver Amuge


explanation

This is crucial for ensuring equitable participation in Internet governance discussions and benefits.


What mechanisms can be developed to track and monitor implementation of commitments made during global Internet governance fora?

speaker

Paola Galvez


explanation

This would help ensure accountability and progress on agreed-upon goals.


How can we address the ‘AI divide’ alongside the digital divide?

speaker

Ananda Gautam


explanation

This is an emerging concern as AI becomes more prevalent in technology and governance.


What strategies can be employed to make global Internet governance discussions more binding or impactful at national levels?

speaker

Ananda Gautam


explanation

This would help translate global discussions into concrete actions and policies.


How can we improve coordination between UN agencies and national/regional Internet governance initiatives?

speaker

Ananda Gautam


explanation

Better coordination could lead to more effective implementation of Internet governance principles.


What are effective ways to build capacity among different stakeholder groups, including judiciary, parliament, and law enforcers?

speaker

Peace Oliver Amuge


explanation

This is necessary to ensure all relevant parties can meaningfully participate in Internet governance discussions.


How can we create more harmonized agendas and strategies across different levels (national, regional, global) of civil society engagement?

speaker

Peace Oliver Amuge


explanation

This would help create a more unified and impactful civil society voice in Internet governance.


What research is needed to map existing stakeholders, knowledge, and resources in Internet governance across different levels?

speaker

Peace Oliver Amuge


explanation

This would help identify gaps and opportunities for more effective collaboration and resource allocation.


How can we better align existing funding in Internet governance to ensure it reaches those who need it most?

speaker

Ananda Gautam


explanation

This could lead to more effective use of limited resources and better representation in Internet governance discussions.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #179 Navigating Online Safety for Children and Youth

WS #179 Navigating Online Safety for Children and Youth

Session at a Glance

Summary

This panel discussion focused on online safety for children and youth in the digital age. Panelists and audience members from various countries shared insights on the challenges and potential solutions for protecting young people online.

Key issues raised included the need for age-appropriate design in digital platforms, the importance of digital literacy education for both children and parents, and the challenges of implementing effective age verification systems. Participants highlighted the global nature of online risks, with similar issues affecting youth across different regions.

There was consensus that online safety requires collaboration between multiple stakeholders, including tech companies, policymakers, educators, and parents. Panelists emphasized the importance of involving children themselves in discussions about online safety policies and practices.

The discussion touched on the tension between implementing universal standards for online safety and adapting approaches to diverse cultural and legal contexts. Some argued for principle-based rather than prescriptive regulations to allow for flexibility across regions.

Participants debated the effectiveness of outright bans on youth access to certain platforms versus implementing robust safety-by-design principles. The role of tech companies in proactively ensuring child safety was a recurring theme, with some calling for greater corporate responsibility.

The need for improved digital literacy programs, both in schools and for parents, was widely agreed upon. Participants also stressed the importance of building children’s confidence and awareness to navigate online risks, rather than relying solely on external protections.

Overall, the discussion highlighted the complex, multifaceted nature of ensuring online safety for youth and the ongoing need for innovative, collaborative approaches to address emerging challenges in the digital landscape.

Keypoints

Major discussion points:

– The need for collaboration between stakeholders (tech companies, parents, educators, policymakers) to ensure online safety for children and youth

– The importance of designing safety features by default into online platforms, rather than relying solely on parental controls

– Challenges in implementing global standards for online safety given different cultural and legal contexts across countries

– The role of digital literacy education for children, parents and educators

– Balancing protection with allowing children to benefit from online opportunities

The overall purpose of the discussion was to explore strategies and challenges related to ensuring online safety for children and youth in an increasingly digital world.

The tone of the discussion was thoughtful and constructive, with panelists and audience members offering different perspectives but generally agreeing on the importance of the issue. There was a sense of urgency about addressing online risks for young people, balanced with recognition of the complexities involved. The tone became slightly more critical when discussing the responsibilities of tech companies, but remained largely collaborative in seeking solutions.

Speakers

– Millenium Anthony: Moderator

– Keith Andere: MAG member for Kenya IGF, former coordinator for African Youth IGF, part of coordination team for Global Youth IGF

– Nikki Colasso: VP of Public Policy at Roblox

– Nirvana Lima: Brazilian researcher on digital cultures, especially kids and teens and youth

– Ponsleit Szilagyi (name might have been misspelled): From the Gambia NRI

– Saba Tiku Beyene: Online moderator

Additional speakers:

– Joshua: From Uganda, Internet Society chapter Uganda, developer

– William: From South Africa, works with children on online safety

– Leander: Executive director of the Five Rights Foundation

Full session report

Online Safety for Children and Youth: A Global Perspective

This panel discussion, moderated by Millenium Anthony, brought together experts from various countries to explore the challenges and potential solutions for protecting young people in the digital age. The panellists included Keith Andere, a Kenya IGF MAG member and former coordinator of the African Youth IGF, Nikki Colasso, VP of Public Policy at Roblox, and Nirvana Lima, a Brazilian researcher on digital cultures focusing on kids, teens, and youth. Additional contributions came from online participants and audience members.

Key Challenges in Ensuring Online Safety

The discussion highlighted several significant challenges in protecting children and youth online:

1. Digital Literacy Gap: There is a widespread lack of awareness and digital literacy among both children and parents, exacerbated by a generational digital divide.

2. Resource Disparities: Low-income families often face issues of shared devices and inappropriate content exposure, while rural areas struggle with implementing effective online safety measures. An audience member highlighted specific challenges faced in rural India, emphasizing the global nature of these issues.

3. Cultural Differences: Cultural variations complicate efforts to establish universal standards, prompting calls for region-specific policies.

4. Technological Hurdles: Implementing effective age verification systems while protecting user privacy remains a significant challenge.

5. Content Moderation: Nikki Colasso noted the challenges in addressing differences in content moderation across cultures.

Strategies for Promoting Online Safety

The panellists and contributors proposed several strategies to address these challenges:

1. Safety by Design: Nikki Colasso advocated for implementing safety principles in product development from the outset, rather than relying solely on parental controls.

2. Principle-Based Policies: There was a call for developing flexible, principle-based policies rather than prescriptive regulations to allow for adaptation across different regions.

3. Education Integration: Nirvana Lima stressed the importance of integrating media education into school curricula to build digital literacy from an early age. She also shared insights from her research on kidfluencers and the concept of prosumers, highlighting the changing landscape of children’s online engagement.

4. National Legislation: Ponsleit, an online contributor, suggested implementing Online Safety Acts at the national level, citing the UK’s Online Safety Act as an example of providing a legal framework for protection.

5. Confidence Building: Keith Andere emphasised the need to focus on building children’s confidence and digital literacy skills to navigate online risks.

6. Developer Involvement: Joshua, a contributor from Uganda, highlighted the crucial role of developers in creating safe online environments, suggesting the involvement of open source communities in developing safety features.

Stakeholder Collaboration and Responsibilities

A recurring theme throughout the discussion was the necessity of collaboration between multiple stakeholders:

1. Tech Companies: The role of corporations in proactively ensuring child safety was debated, with some calling for greater corporate responsibility. William from South Africa critiqued Roblox’s safety measures, to which Nikki Colasso responded by outlining the company’s efforts.

2. Policymakers: The need for cross-border collaboration and harmonised legal frameworks was highlighted by Keith Andere.

3. Educators: The importance of digital literacy programs in schools was widely agreed upon.

4. Parents: While parental awareness is crucial, Nikki Colasso argued that relying solely on parental controls is unrealistic.

5. Children: The importance of involving children themselves in discussions about online safety policies was emphasised.

Cultural and Legal Considerations

The discussion touched on the complexities of implementing online safety measures across diverse cultural and legal contexts:

1. Global Standards vs Local Adaptation: Keith Andere highlighted the need to adapt global standards to local contexts and legal frameworks. Leander from the Five Rights Foundation provided important context about global standards and corporate responsibility.

2. Online-Offline Continuity: Keith Andere suggested applying offline regulations to online spaces where appropriate.

3. Principle-Based Approach: To address cultural variations, there was support for adopting principle-based policies rather than prescriptive ones.

Unresolved Issues and Future Directions

Despite the productive discussion, several issues remained unresolved:

1. Age Verification: Balancing effective age verification with privacy concerns continues to be a significant challenge.

2. Global vs Local Approaches: Finding the right balance between global standards and local cultural contexts requires further exploration.

3. Resource Disparities: Addressing the gap in resources between countries for implementing online safety measures remains a concern.

4. Rural and Low-Income Access: Making online safety education accessible in rural and low-income areas needs further attention.

Conclusion

The discussion highlighted the complex, multifaceted nature of ensuring online safety for youth in the digital age. While there was general consensus on the importance of the issue and the need for collaborative approaches, differences emerged in the specific strategies proposed. The panellists and contributors emphasised the need for innovative solutions that balance protection with allowing children to benefit from online opportunities.

Moving forward, the key takeaways include the need for multi-stakeholder collaboration, the importance of both technical solutions and education initiatives, and the necessity of adaptable approaches that consider diverse cultural contexts. As online safety for children and youth continues to evolve, ongoing dialogue and cooperation between tech companies, policymakers, educators, parents, developers, and young people themselves will be crucial in developing effective and comprehensive solutions.

Session Transcript

Millenium Anthony: And here with me today, I’m joined with my panelists. So I’ll also give them an opportunity to introduce themselves. And we actually have one more speaker who is just joining us right now. Yes, I’ll start with you, Keith. Please introduce yourself, and then we’ll go around.

Keith Andere: Thank you so much, Millennium, and friends and colleagues. Good afternoon. I’m hearing myself, so I’ll just remove the mic and hope that I’m also audible. My name is Keith Andere. I am from Kenya, and I am an IGFer in the sense that I’m a MAG member for Kenya IGF, but also I’ve served previously as the coordinator for the African Youth IGF. And I’m pleased to also be part of the coordination team for the Global Youth IGF that has put together this year’s summit. So happy to be here, and looking forward to a great session.

Millenium Anthony: Thank you so much, Keith.

Nikki Colasso: I hope you can hear me. And I can also hear myself, so I’m going to take my headphones off. I’m Nikki Colasso, and I’m VP of Public Policy at Roblox, which is a gaming company that some of you may know as a children’s gaming company. But actually, we have many different ages of people that play Roblox. I am based in the US in the San Francisco Bay Area, but I’m pleased to be joined by colleagues here from the UK and also the EU. And I had the pleasure of being at IGF in Ethiopia two years ago, and I remember the youth council there. So I guess I’m also an IGFer. This is my third consecutive, and I’m thrilled to be here. So thank you.

Nirvana Lima: Thank you. Yes. Hi, everyone. Sorry I’m kind of late. I lost my badge and I need to get another. So my name is Nirvana Lima. I’m a Brazilian researcher on digital cultures, especially in kids and teens and youth. And I’m here to speak with you about my research and my work in Brazil. I have a master’s degree in communication and a solid experience in this topic. So thank you so much, Millenium, for the invitation. It’s a pleasure being with you here today.

Millenium Anthony: Thank you so much, my dear panelists. Today, we are also joined by our online moderator. We have Saba here, so she’s going to be helping out with moderating online participants. So this session, mainly, we’re going to be discussing online safety for children and youth. So we have seen children and youth are now getting more exposed to online spaces. And online spaces are no longer safe for them. So we have seen, let’s say, we have seen, I think, like in the couple of previous months, we have seen parents suing the big tech companies because their children are no longer safe and all that. So this is really an important discussion for us to have, to see the challenges that are there and strategies on how we can now help these young people to stay safe online. So we have different questions, the policy questions, that are going to be guiding our discussion today. And I’ll just mention them quickly. But as I go through my panelists, I’ll ask them individual questions. And the policy questions that are going to be guiding us, one says, how can stakeholders collaborate effectively to empower parents and children in ensuring online safety? What strategies are most effective in promoting active participation in securing the digital space? And the second one says, in the dynamic digital environment, what key indicators should be considered when designing online safety programs to ensure their relevance and effectiveness in addressing emerging risks faced by children and youth? And the last policy question says, considering the diverse cultural contexts and legal frameworks globally, what innovative approaches can be adapted to reconcile differences and establish universal standards for online safety interventions, particularly in regions with varying levels of internet access and digital literacy? So I’ll now go to my speakers and I’ll start with you, Nikki. From your experience at platforms like Instagram and Roblox, what innovative safety tools and policies have proven effective in empowering parents and children to ensure online safety? And also, how can these approaches be adapted to different cultural contexts?

Nikki Colasso: Yeah, I wish I had a perfect answer, but I’ll tell you what I think. And I’ll speak mostly from the tech perspective since I work for a private company. I think that we are in a moment, I think it’s longer than a moment, but we’re in a place where there is acknowledgement that tech companies need to do more. And a lot of the practices that used to kind of feel good, or... can you hear me? I just lost sound, okay... have to be updated for where we are in 2024 going into 2025. And so I remember a time when we talked a lot about parental controls and empowering parents to make decisions. And there is no doubt in my mind that that is critical and that all companies, Roblox included, need to give parents these tools and they need to put them in a place where they are empowered. At the same time, I am also a parent and I look at my phone. My kids don’t have smartphones, but parents have tens and tens and tens of different apps on their phone. Even the most curious and technologically savvy parents cannot possibly navigate to each individual app, set parental controls, learn that app. I think it’s not realistic. And so, of course, those controls need to be there and we need to be providing these tools so that parents have them and they need to be dynamic and they need to work. But I think that where we are now is really looking at defaults and looking at what are the initial settings on these platforms, including Roblox, and acknowledging that we need to provide that safety net in addition to the parental controls. And that is a responsibility that tech companies have. And defaults will look different from platform to platform, but acknowledging that it can’t just be about parental controls because parents are overwhelmed. We actually have to establish these tools on the tech side. So I think to me, in terms of dynamic approach, I do think that combination of defaults and parental controls is important.

Millenium Anthony: Okay, thank you very much, Nikki. So I wanna come to you, Keith, following from whatever that Nikki has said. Do you think it’s right? Now, we have seen the safety of children and youth online is now endangered. Do you think, from your perspective, do you think it’s right to just stop these kids from accessing online platforms, I mean, the internet, or just putting some measures to control their use? What is your thought on that?

Keith Andere: Thank you so much for the question. I think I’ll speak in two fronts. One, as a general internet user, and secondly, as somebody who is coming from a global south. I think right now we are moving everything online, be it government services, be it education. Life post-pandemic has shifted and has necessitated technological use to the extent that hadn’t been there before. So for us as adults wanting to engage in sessions such as IGF, for example, here, and we have colleagues and friends who are connected online, it eases a lot of pressure on how we can engage here. If for one reason or another somebody is not here, they are still able to connect and follow this session. Rightfully, like you said, we now have an online moderator, something that perhaps before COVID, it was not anything that we would think of. Like can we even put online engagement, for example. So post-pandemic, we can’t go back to pre-pandemic era, in the sense that it’s an oxymoron for us to want to have government services online, all the kind of engagements that we have. We now have working from home kind of concepts, which before pandemic, the freelancers were being seen as somebody who’s not serious. So for children, I see that it’s zero-sum math if we want to deter them from utilizing online applications and all of these things for whatever thing that they do. How then do we make it safe? I think the principle of security is a very fundamental aspect, not just on apps, but even on internet. When you’re talking about internet, there are certain parameters that come to our mind. You know, open, free, accessible, secure, and all manner of things. So, these kids are already growing. How do we deter them from going online? Yet, we are all going online. And in the next two, three years, they’ll become adults. So, I see that we as stakeholders in whatever format that we are in, whether civil society, whether governments, whether youth, or all these people, we need to start thinking security by design. And one of the things is having the kids here, so that we are not speaking for them, but we are also listening what they are saying. So, we’ve seen women in IGF, children in, I mean, youth in IGF, but I long to see the children component within the IGF space, so that these young kids can come and speak what their challenges are, but not having me as somebody who’s on the larger bracket of youth speak about children and I can’t identify with them. So, I think, to sum up the question, we really need to protect them, but we also need to have them come and speak. So, that for me is something that I would like to see going forward, more children in IGF spaces. Thank you.

Millenium Anthony: Thank you so much, Keith. So, we have, I mean, the current discussion is basically on how now we can protect these kids and youth online. I want you, Nirvana, to please tell us what do you think, I think you did a research on digital cultures in childhood, right? And what are the most pressing challenges that kids and youth face online? So, maybe we don’t know, maybe what do you think are the most pressing challenges that kids and youth face?

Nirvana Lima: Okay, I’m going to bring a Brazilian perspective to the field. First of all, I’d like to thank Odisha for the opportunity of being here speaking to all of you today. And I want to express one more time my gratitude to Millenium and, last but not least, to my youth members from Youth Brazil, whom I have the pleasure of facilitating in the year of 2024. Please, please raise your hands. Okay, thank you. I’ll begin my answer saying that this question is quite complex. But so is the phenomenon of kidfluencers, which is the main topic of my research. The term digital influencer has undergone a discursive shift in the Brazilian market, media, and research, even academic research, especially since 2015. This shift is linked to the entry of new platforms into the content production landscape. Because the concept is still evolving, its definitions change fast. However, one thing is undeniable. Young creators are driving discussions around celebrity culture and consumerism while also remaining as vulnerable as the audiences consuming their content. Kids and teens today are prosumers, just like me, just like you and all of you. But do you know the meaning of prosumer? Anyone here? Prosumer? So the term prosumer was created by Alvin Toffler in the early 80s, when he predicted that the roles of producer and consumer would increasingly blur. Nowadays, even children who are not yet literate can be creators or play a starring role in videos, photos, or viral content with either commercial or purely entertainment purposes. According to the latest TIC Kids Online Brazil survey, 88% of children and adolescents aged 9 to 17 have social media profiles. On one hand, this is a milestone reflecting the growing digitalization of Brazilian society. On the other hand, we can’t ignore the risks that come with this online presence, which exposes young people to potential harm to their personal safety, reputation, or security. The truth is that kids and teens are not digital natives, despite what some may claim. This generation learns how to navigate online through trial and error, just like everyone else. They are vulnerable to exploitation, not only of their personal data, but also within the influencer economy, which includes advertising agencies and talent agents for child celebrities. The data they generate can be used for commercial exploitation and for targeted ads, or even to manipulate their emotions, beliefs, and opinions. For an entire generation, the internet is a double-edged sword, like we used to say in Brazil. While it offers incredible opportunities for learning and social interaction, it also exposes children and adolescents to significant risks, such as violence, pornography, cyberbullying, and misinformation. I firmly believe that initiatives aimed at educating children and teens, but also parents and educators, are indispensable, because they will help them become aware and responsible online users. This is an important path forward. It’s a responsibility that we as a larger community must share, ensuring their safety online.

Millenium Anthony: Thank you so much, Nirvana. I think from the discussion that we’ve just been having here, I’m getting it that online safety, like how, I mean, the ways that we can use to protect these children and youth, for them to be safe online is not a one-man work, right? It’s a stakeholder thing, right? Now, Nikki, please help us understand how do you think, what key indicators can the stakeholders prioritize when designing online safety programs to address emerging risks? For example, Nirvana has said different risks that children and youth face online. And yeah, like what key indicators should be considered by these stakeholders?

Nikki Colasso: Yes. Okay. So I think first is a question of, are there technical limitations? You asked, like, do bans work? Like if we were just to ban whatever set of technology, whether it’s social media or gaming, does that help children? And I think there are lots of different opinions on that. In Australia right now, we’re seeing that there is a social media ban for under 16s. and there’s a lot of discourse about what that means and whether that’s the right course for children. I think one question is, do technical implementations actually work? So do we actually have the ability to ban children? And very often, I think what we find is no. And for that reason, we’re seeing nine-year-olds on social media when the minimum age for a lot of these sites is 13. And so I think there is a valid argument that outright bans don’t work. And in terms of indicators, I think that what you were saying before, Keith, about safety by design in terms of working with multi-stakeholders to bake safety in at the product conception phase so that we’re not retrofitting to keep kids safe, but actually designing things with them in mind is the right path forward. And I think that much of what we build, instead of being super prescriptive, which says children can or can’t do this, or 14-year-olds can and can’t see this, or girls should or should not see that, I think it’s so dependent on the child and it’s so dependent on where they live, what access they have to technology, that it’s much better instead of being prescriptive to be principle-based and say that the policies that we write should be created in the best interest of the child and that there is a responsibility. I think the best practices would say, let’s be principle-based rather than super prescriptive because I don’t think that that works as technology changes and as innovation occurs.

Millenium Anthony: Can you hear me? Okay, I can hear myself. Okay, so I think I really liked that point you have mentioned about, you know, there’s this, what they call the design thinking approach, where you bring the stakeholders to the table and then you use them to understand the needs that they have, but also to innovate creative ways of solving problems. So I think that could be really a nice approach to have the kids in this space, understand their needs and the challenges that they face, rather than just sitting and coming with solutions that we think, maybe we just need to ban the internet from them, while maybe there could be another way or another solution that we could use. So now I want to turn back, we have talked about, we just had this discussion here about the importance of collaboration and stuff. So what do you think are the biggest challenges in ensuring online safety for youth from your experiences in your specific regions or countries? What do you think are the biggest challenges in ensuring online safety for children and youth? Anyone, if you’re ready, you can just raise your hand and then we’ll pass the mic. Okay. Ah, we have a mic over there. So I think I saw three hands. Oh yeah, you can start. Hmm. No, please turn it on.

AUDIENCE: Is this one working? Yeah.

Millenium Anthony: Pass the mic. Hello. Hello. Am I audible now? Yes. Now we can hear. Okay. Thank you. Yeah, OK, yeah.

AUDIENCE: Sorry, I came late. But this subject is like, I work in a rural. I came from India. I work in rural communities. And we have a lot of first-time internet users. And especially the targets are the youths who are not aware of how to use the internet, first of all. So when they get internet, they are so much attracted to everything that is there. So they don’t understand what’s there. And we had an incident with a girl who got abused online. And she didn’t know what to do about it. And she couldn’t tell her family, nor any friends, but some very close person who she thought could help. And when they approached the cybercrime unit, they said, you need to bring proof. So the girl, she still doesn’t know what to do with the content that is available, when she fell for the other person who faked her, to be her friend or something. So this is still a challenge. We still don’t know in India, especially in rural sectors, how to address these problems. And when we asked about, how safe have you kept your accounts? And when we checked with not just the girls in the community, not just the youths, but the boys also, they said, we don’t care. So somebody could actually hack your account and use your name to target somebody. So these things, these standards, like how do you talk about it? Like awareness, how do you talk about these problems? Like why? Why didn’t the girl go to her family and tell first, like, I have, I’ve been subjected to this problem? Like, how do we address it? So there are many challenges in India, especially in rural areas, where still nobody wants to talk about it; it’s only to the peers, like, okay, I told my friend, that’s all, it’s done. So even the boys need to understand how to keep their accounts safe and to talk about it to your friends and families; it’s okay to tell that this is happening. So this is one of the incidents, and still we don’t have proper support, proper standards, a proper awareness system. How do we do it? So we still don’t have a model for this.

Millenium Anthony: Wow thank you so much for your contribution I think there’s really a big work that we have to do especially in investing in training these young people like building capacities when it comes to online safety how do you use um the internet safely how do you put your social media how do you put yourself out there so that you don’t attract um maybe people that abuse people online like bad comments and all that I think it’s really something that we must invest to train our youth and children and I yes please after her I’ll take one more contribution and then we’ll see if we have a contribution from online.

AUDIENCE: Thank you. I’m also from India, so I’ll continue speaking from where she left off. So basically, to answer the point, we need to sensitize more boys as well, because there is a fun element that is attached to sharing any content that is online, sharing, resharing of the content. Also, there is a huge generational digital divide. So we talk about parents, we talk about educators, we also talk about children, educating them, because with diversity there is the challenge of parents still catching up with technology, while children have moved fast. That is the reason that we are unable to bridge that gap, and that can only happen once we do a lot of awareness programs. So we do one awareness program about use of gadgets, but then what happens, technology advances, we need to go back and talk about something new that has come up. So it’s a vicious circle that needs to be adopted, but yes, the lack of support is of course there, which puts things at a standstill, or low-paced, I would say. The other point is of the shared devices. Children probably are having their own devices, but in many parts of India, children don’t have their own devices, so they share devices of their parents. So when they start exploring devices of their parents, they also come across content which is not appropriate for them. That is one of the challenges that we face, and also in low-income groups especially, or in marginalized communities, one phone in a family is a big deal, right? And then having internet on their phones, because probably parents are watching some content which is appropriate for them, but not for the kids, but then through advertisements, through other channels, they come across content which raises questions in their mind, but they’re unable to ask because of the sensitivity that we have in the cultures. To the point, I’m sorry, we’re taking too long, but yes, to the point on having multi-stakeholder interactions, I think it is more about knowledge and experience. The ones who have expertise have knowledge, but to the point you made on involving children, that is because of the experience. So we need to have knowledge and experience both in the same room to talk about how the policies need to be shaped and adopted. Thank you.

Millenium Anthony: Thank you very much. I’ll take one more from here, and then I’ll move online and get back to my panelists, and then we can get back to the floor again.

AUDIENCE: I can talk?

Millenium Anthony: Yeah. I’ll come back to you.

AUDIENCE: Yeah, thank you. My name is Joshua from Uganda, from the Internet Society chapter Uganda. And I’m also a developer as in building systems. And that’s actually my point. I think one of the stakeholders we often forget is the person who actually builds the system. Lucky for today at least we have somebody from Roblox. My kids are fans. I don’t build that system though. Yeah, I know. You don’t want me building that system. Yeah, that’s a good point too. Well, guys who build the systems often get forgotten in these discussions. And the time you’re building such a system, you’re just looking at the target. I need to get this software out by this and this day. So most of the times we are saying, hey, the system should do this, should have this two-factor authentication and all these other things. But if I’m working on a deadline, trust me, I’m going to leave all that stuff out. And I think one solution to help is to involve the open source community in these discussions. Because that is one of the ways you can shorten that development time such that all these measures we are talking about, these things that should be by default, are somewhere as a repository that any developer can just pick up and use. And now we have applications across the board that are safe for all our users. So those are the guys I think we need to involve in this IGF, the open source communities.

Millenium Anthony: Thank you. Wow, thank you so much, Joshua. Back to my panelists now, coming to you, Keith. We have the global standards of online safety, the global regulations that are set. But then, considering that we have different legal and cultural frameworks in our specific regions and countries, how can we balance now between the global standards of online safety and our own culture, our own legal frameworks in our specific countries? Is there like a way that we can balance this?

Keith Andere: That’s, okay, I felt like I’ve shouted at myself, huh? But that’s a very interesting question, in the sense that even when we have global standards, I think for children’s safety, we might not have a universally accepted global standard. Why do I say this? Kids from my village in Kakamega, somewhere deep in the interior of Kenya, are not as privileged as maybe kids from India. I use India in the sense that the parents are already technologically aware. And I use the word aware very deliberately, because it’s not to say that all parents in India, for example, have access to the technology. But the parents of India, for example, are more aware that this is the context through which they can adopt technology. And we’ve seen India as a success story on how they deploy technology in all spheres of their life. And this is true also if you want to look at the global North versus the global South. And Africa is also very unique, if I was going to speak about Africa, for example, because there are other things, like you’ve mentioned, legal frameworks that we do have. So you find different African countries struggling with even basic frameworks such as data protection, computer or cybercrime, cybersecurity kinds of laws. So I think we cannot adopt a universally accepted kind of policy, but I can share a few pointers that perhaps can guide and arrive at the same response that you’re looking for. Maybe one is, are we able to develop region-specific policies? So that if we are looking at Africa, for example, then we are looking at Africa as a region and the context of Africa. And even when we do that from an African Union point of view, then we can scale it down. You know, that just becomes a global framework through which some cross-border kinds of crimes and issues can be addressed in the context of that framework. But we really need again to break it down, so West Africa and East Africa and Southern Africa, because the needs and the cultures of West Africans and Southern Africans are different. Perhaps we also need to look at cross-border collaboration. How do we go outside of the legal framework, for example, that Kenya has, and look at another country and pick and harmonize these frameworks, so that whatever is illegal in Kenya is also illegal in India, for example? So that if somebody is trying to perpetrate some crime from India, then it’s not very different from Kenya. Then we can use those kinds of harmonized frameworks to support this cross-border collaboration and enforcement as well. Again, the issue of legal frameworks. How do we build capacity to strengthen these legal frameworks, you know, and ensure that we have comprehensive cybersecurity and data protection laws that also consider local contexts? You know, if you look at META, I think one of the key things that they grapple with is local and cultural contexts of issues. What is hate speech in Kenya is not hate speech in another country. And so even the people who are now doing content moderation, when you flag something as hate speech, they might not see it as hate speech, because the context is different. I think we also perhaps need to look at education, digital education and digital literacy. Because then what that means is that we are going to promote accessible digital education in schools. But not just for kids. How do we look at parents as well, as people who we can target for digital literacy? Because then, through that awareness, issues are addressed. I think resource disparities are also a big issue. There are some countries that are rich. There are some countries that are not rich. You find different countries grappling and struggling with catastrophes such as flooding, drought, climate issues. So how then do you ask them to put X amount of money in digital literacy when they’re just trying to keep these people safe, or at least out of the calamities that they already are having? So I think one of the ways also is to look at addressing the resource issues, to the extent that we can support countries with limited financing and also technical resources. Because you’ve come to Kenya, everybody’s almost a geek. But then you find a country like maybe Madagascar or Mozambique, where the resource that they have, cybersecurity professionals, are such a small number. If you look at different reports, they say Africa has a shortage of cybersecurity professionals, to the extent that up to 20,000 professionals a year are needed. Now what does that mean if you’re now contextualizing that in terms of resources that can, you know, develop and support these kinds of issues, to ensure that, you know, we are building a standard that is also localized? So this expertise is something that we should export. You know, Joshua here with all his technical expertise, that expertise that he has can be exported to Sao Tome, for example, or Cape Verde, and support the people of Cape Verde. Because if it’s just in Uganda or in Tanzania, I don’t know, how then do we take advantage of such expertise? I’ll stop at that. Thank you.

Millenium Anthony: Wow. Thank you so much. Saba, do we have any contributions from online? We have a question? Okay. So I’ll get to Nirvana and then we can move back to the question. Now to you, Nirvana. What strategy, you have worked with kids, right? You have worked on different projects on how to protect children and youth online. So can you share with us if there are any strategies that you find effective in promoting active participation of children and parents in securing a safe digital environment?

Nirvana Lima: So over the past years, I’ve been conducting research and working on issues related to internet governance with young people aged 18 to 25, who are the target audience of the Youth Brazil program. But I had the opportunity to teach workshops on responsible internet use to young people aged 16 to 18. However, I must admit that unfortunately I haven’t yet worked directly with children. Though I believe in the importance of doing so as soon as possible. This is not just because of my responsibility as a researcher and a popular educator, but also for the civil society and government organizations. Children and teenagers are facing serious challenges, and as adults, we need to study them, we need to address them. Since 2018, the World Health Organization has officially recognized digital addiction as a disorder, sounding the alarm for parents and educators about the excessive time children spend in front of screens. It’s more than clear that we need to develop and implement effective methodologies focused on media literacy for children, adolescents, parents, and educators. In Brazil, along with other countries in the Global South, we must begin integrating media education into school curricula. I think this is a great start. From primary to high school, being connected to the internet is a reality, as all of us know. But as for this and future generations, we must ensure that they are equipped to use the internet in ways that serve their best interest. So we all are responsible for them.

Millenium Anthony: All right. Thank you so much, Nirvana. I now welcome questions from the floor. If you have any questions to any of our panelists, yes, I’ll take you, and… he was the first, and then you, and then we’ll do one from online. So one, two, three, four, five. One minute each. What? It’s a contribution? Okay, so please allow me to take the contribution from online first.

AUDIENCE: It’s good, you guys, congratulations on the discussion. And as you were talking, I guess you have a main challenge in Brazil, because it’s a huge country. So I guess one of the main challenges is the difference in law enforcement between the different regions in Brazil. Like, I’m from the North, and the digital law enforcement, for children and for, I mean, everything, it’s very different from the enforcement in the South. And I guess, as a federative state, we don’t think in laws considering this difference, you know. As I said, our law is federal, but with, um, state laws focusing on each difference between our regions, Amazon and South... I guess this is a huge problem for Brazil, and it must be in the discussion in every government, and it’s missing. But my question, for any of you, any of you guys can respond to me, is: what is the main challenge for the stakeholders when you are designing a tool for children, considering different legal frameworks around the world? Are there some convergences, some main convergences, between the different law enforcements between the countries? Do you think there is something that is common between India, Brazil, the United States? Is there something that is common in the law? Or do you think we don’t have this, we don’t have a point in common between all the laws focused on children and youth? Is this a question?

Millenium Anthony: Yeah. Any of my panelists who is ready to respond on that in one minute, if you can take a second. Oh, we don’t have a mic.

Nikki Colasso: So I’m not a lawyer. So I thought Mike was going to run. I’m not a lawyer, so I can’t speak to commonalities in the actual law. But I think often what companies try to do is synthesize much of what is illegal in many places into what is often called their community standards or community guidelines. And that’s kind of the governing document for a service. And in that document, you’ll usually find you can’t perpetrate illegal acts. It’s kind of broadly defined to cover different geographies. You can’t use hate speech. You can’t commit fraud. These kinds of things, which tend to be common across geographies. What I think is much harder, which Keith touched on, is the speech issue. So, what may be illegal to say: in France, by law, you cannot deny that the Holocaust happened. In the US, in theory, you can. We would all agree it’s hateful, but you actually can do that. And I think it’s much harder to moderate around the speech issues, which doesn’t mean that the legal stuff isn’t difficult, but I think the moderation piece is very, very difficult. I think a lot of times the governing documents, like the community standards, community guidelines, try to find those places in common in the law.

Millenium Anthony: Thank you so much, Nikki.

Keith Andere: Just managing this tech thing. Okay. In addition to what the colleague here says, I think it’s also important to note, or to remember, that we cannot regulate online what we are unable to regulate offline. So what applies offline is basically what will apply online. So if, for example, we have quote-unquote freedom of speech here, to the extent that we can generally say many things which can pass, how then do I curtail you from typing this? If I can speak to you and say certain words, then it becomes difficult, just because it’s written, to say that we can curtail it or we can put a standard to it. So in my submission, I think it then starts with applying these from offline, so that then it’s applicable online. Otherwise, if we look at it from just an online lens, it will be difficult to enforce, because that is not what happens offline.

Millenium Anthony: Perfect. Thank you. Please, can we take our contribution from online and then come back on site? Yeah, thank you. Can you use less than a minute if you can? Yeah, yeah. Thank you very much.

Ponsleit Szilagyi: Thank you very much. Ponsleit Szilagyi speaking from the Gambia NRI. One of the main things I want to contribute in this session is having countries implement an Online Safety Act, as the United Kingdom did in 2023. And that Online Safety Act protects not only children, but adults. When you have an Online Safety Act, most of these problems we have in navigating online safety for children and young people, which happen on most social media platforms, are covered, and the social media companies are now held more responsible. And yes, it might be difficult. It might take time, because everybody’s doing different things in putting an act together, but I believe that advocating for an Online Safety Act within the context of each country is the way to go about it. In most of the global South, where most of the children don’t really have internet access, what happens is in their schools, whether it’s in public schools or private schools. And you can also introduce forms of educational online safety, but it starts with an act. And if you look at the UK example in the uk.gov.uk, you discover that it’s very strong in protecting children. And I think that’s the way we should go forward. Thank you.

Millenium Anthony: Wow, thank you so much for the contribution. Please allow me to take the second question back there. I’m sorry, the second and then the third. Please let’s try to use less than a minute to summarize our questions. What?

AUDIENCE: Hi, can you hear me? Yes. My name is William. I’m from South Africa, and we do a lot of work with children. And in fact, what we’ve been doing, coordinated by UNICEF is a group of all entities that work on online safety to try and come together and try and build common approaches. And we’ve just last week, in fact, presented South African online guidelines for dealing with online safety. There’s an absence from that group and it’s Roblox. And Roblox internationally has come under fire quite a lot because children find it very easy to get around the measures that you’ve got in place currently to protect them. And so that comes to a point of political will. And of course, the other big one around age verification, which as I understand it is what the Australian government is saying, until and unless you can demonstrate comprehensive age verification systems, we’re gonna say no under 16. So I’d love to get your feedback on that. Thanks very much. Fascinating panel.

Millenium Anthony: You wanna respond to that Miki?

Nikki Colasso: Sure. So, I mean, I guess, do you want me to respond to your question about Roblox or in terms of Australia? Yeah, so let me start by saying that, like, we are not perfect. Like Roblox is not perfect. And I don’t think any tech company is. But I do believe that we are very safety first in terms of trying to get the right outcomes and putting children at the center of what we do. And not just because it’s the right thing to do, although it’s the right thing to do. It doesn’t even make sense from a technological or business perspective to not protect children. Like our service would cease to exist if we didn’t do that. With that said, I think that there is a lot of fair, or just fair, I don’t know if it’s criticism or sensitivity, about the role of tech companies in society and what they need to come to the table to do. And I think that’s probably been building up for quite a long time. And I also think that, you know, companies very often will be covered in the press. But the reality is that, you know, we develop, we’ve actually just announced a whole set of parental tools so that children can’t talk one-on-one outside of games, right, to prevent grooming. And that’s a default, like what I talked about before. And I think we’re constantly trying to update that, but we’re also a company of 2000 people. So if you compare us to like a huge social media company, we still need to grow in scale. So I think the answer is we are not perfect, but I do think that we are consistently focused on the right outcomes. I think the question about age verification is an important one. I do think if there was a technology that could perfectly verify age, it would make sense for countries and companies to use that to the extent that they could. The problem is that those technologies are often very circumventable. So they are easily gotten around, particularly for children who are very tech savvy, or they might work very well, but they’re gonna collect a ton of biometric data about the child or the person in question. Then it’s a little bit of a, how do you rate that? Is it more important to keep them off a service and prevent access totally, but collect a lot of their private biometric information in the process? And I think that is the central question that policymakers and tech companies are struggling with. My hope is that over time, these tech solutions get better, but I don’t think we have a great answer to that right now. I think age verification continues to be the question that really troubles people across tech and policy circles and in the public sector.

Millenium Anthony: Well, thank you so much, Nikki. Please, let's get…

AUDIENCE: Right, hello, can you hear me? Yes. Okay, hi, my name is Leander, I'm the executive director of the Five Rights Foundation, and we are a global NGO working on children's rights in the digital environment with the mission to build the digital world that children and young people deserve. We represent and work with and for children from around the world, and I have a couple of points to make. First of all, building on what people have said in the room, I think it really has to be extremely clear that this is a global problem, and that the issues cited in India could just as well be in Uganda or in the US, and they certainly are. And the reason that children are having very, very similar experiences online around the world and facing the same risks and the same harms is because they are using exactly the same products. This gets us to the point: is there a global solution, and do we need global standards? The answer is very, very clearly yes. There are companies representing 25% of global GDP, and there is a massive power differential between those companies and the children that we are talking about. And indeed, Nikki, I actually agree with so much of what you have said about the fact that this is not for parents to deal with, this is not about digital literacy, this is not about educating children to navigate an environment which is controlled by a number of companies. It really, yes, is about safety by design, and it is about corporate responsibility. And this is exactly where we can have global standards. And to Joshua's point, this is where designing with certain basic principles and standards in mind is very, very possible, and we shouldn't overcomplicate things. Indeed, the same applies online as offline, as was said before. These are products, and we do have product safety regimes, we do have privacy regimes, and these things should apply online. So much of that is already there. There is the Convention on the Rights of the Child, there is General Comment 25, which sets out exactly how children's rights apply in the digital environment, and we now have the Global Digital Compact. On regulations, our colleague online mentioned the Online Safety Act; there's also the age-appropriate design code, which exists in many countries and which Roblox has endorsed, having been one of the first endorsers of this code, and these set out non-prescriptive, tech-neutral systems for companies to follow. So that is there, and we must apply it. The big problem that remains, I think, is that colleagues who work on policy and who say the right things are very distinct from the business interests and the people at the top who are making the business decisions, and who are maybe not feeding back to the designers, who are given very specific metrics that they need to implement which are not focused around child rights and safety. Thank you.

Millenium Anthony: Thank you. Saba, please can you read for us the question online? We have less than one minute, so if you have any questions from the audience, you can ask at the end.

Saba Tiku Beyene: So Omar Shuran is asking: what do you think are the most important steps we can take to ensure that kids feel safe and supported when exploring the digital world, and what are the measures?

Millenium Anthony: Anyone who can summarize in less than a minute?

Keith Andere: For us to make kids feel safe online, there must be an element of trust, and trust here comes from a point of confidence. Feeling is not something that you can point at, so I see that if they are confident, then the aspect of feeling safe comes in. And confidence is not just in using. If I give my phone to a three-year-old today, even though they might not have used an iPhone, within no time you'll find that they've figured out what an App Store is, and they've gone and downloaded a game, taught themselves to play it, and they'll return to me telling me, I can't go past this level. And I'm like, okay, so what even is this? And that comes with literacy. Literacy not just in using, but literacy in security, literacy in knowing that this is a potential threat. I've seen kids playing a lot of online games, and sometimes the people on the other side, for them, they just see as a game player or a mate. But this person, you find, is an adult trying to push these kids in a certain direction: come and do this, or if you don't do this, I'll give you this. So how do we make them feel safe if they are not aware? That is something that might not be achieved if we don't look at literacy: security literacy, cybersecurity, cyber hygiene, the do's and don'ts, how to navigate threats. For me, it's very easy to know what to do when I see somebody trying to spam me, but will my 12-year-old daughter do the same? Maybe not, unless, of course, I've shown and told her: look, watch out for this, and when this happens, this is what you do. That's what I think.

Millenium Anthony: Wow, thank you so much, Keith. Thank you so much to my panelists, thank you for your contributions, and thank you to the audience for being very interactive. We're out of time, so thank you very much. Please connect with our panelists; they're going to be here, so if you had a question, you can connect with them afterwards. Thank you.

AUDIENCE

Speech speed: 163 words per minute
Speech length: 1998 words
Speech time: 731 seconds

Lack of awareness and digital literacy among children and parents

Explanation: The speaker highlights that many first-time internet users, especially youth in rural communities, lack awareness of how to use the internet safely. This lack of knowledge makes them vulnerable to online risks and abuse.
Evidence: An incident where a girl was abused online and didn't know how to handle the situation or seek help.
Major Discussion Point: Challenges in Ensuring Online Safety for Children and Youth
Agreed with: Keith Andere, Nirvana Lima
Agreed on: Importance of digital literacy and education

Shared devices and inappropriate content exposure in low-income families

Explanation: The speaker points out that in many parts of India, especially in low-income groups, children share devices with their parents. This leads to children being exposed to content that may not be appropriate for them.
Evidence: Example of one phone being shared in a family, where parents might watch content appropriate for them but not for kids.
Major Discussion Point: Challenges in Ensuring Online Safety for Children and Youth

Generational digital divide between parents and children

Explanation: The speaker highlights the challenge of a significant gap in technological knowledge between parents and children. This divide makes it difficult for parents to guide and protect their children in the digital space.
Evidence: Observation that children are fast-paced in adopting technology while parents are still catching up.
Major Discussion Point: Challenges in Ensuring Online Safety for Children and Youth

Difficulty in addressing online safety issues in rural areas

Explanation: The speaker emphasizes the challenges in addressing online safety issues in rural areas of India. There is a lack of proper support systems and awareness programs to deal with online safety problems.
Evidence: Example of a girl who was abused online and didn't know how to seek help or report the incident.
Major Discussion Point: Challenges in Ensuring Online Safety for Children and Youth

Involving open source communities in developing safety features

Explanation: The speaker suggests involving open source communities in discussions about online safety. This could help in developing safety features that can be easily implemented by developers working on tight deadlines.
Evidence: Observation that developers often leave out safety features due to time constraints.
Major Discussion Point: Stakeholder Collaboration for Online Safety
Agreed with: Keith Andere, Nikki Colasso
Agreed on: Need for stakeholder collaboration in ensuring online safety

Keith Andere

Speech speed: 141 words per minute
Speech length: 1936 words
Speech time: 822 seconds

Need for region-specific policies due to cultural differences

Explanation: Keith argues that a universally accepted global standard for children's safety may not be feasible due to cultural and technological differences between regions. He suggests developing region-specific policies that consider local contexts.
Evidence: Comparison of technological awareness between children in rural Kenya and India.
Major Discussion Point: Cultural and Legal Considerations in Online Safety
Differed with: Nikki Colasso, Ponsleit
Differed on: Approach to online safety regulations

Promoting cross-border collaboration and harmonized legal frameworks

Explanation: Keith suggests the need for cross-border collaboration and harmonization of legal frameworks. This would help in addressing cyber crimes that transcend national boundaries and ensure consistent enforcement of online safety measures.
Major Discussion Point: Stakeholder Collaboration for Online Safety
Agreed with: Nikki Colasso, AUDIENCE
Agreed on: Need for stakeholder collaboration in ensuring online safety

Addressing resource disparities between countries

Explanation: Keith highlights the need to address resource disparities between countries in implementing online safety measures. He points out that some countries struggle with basic frameworks while others are more advanced.
Evidence: Mention of different African countries struggling with basic frameworks such as data protection and cybersecurity laws.
Major Discussion Point: Stakeholder Collaboration for Online Safety

Applying offline regulations to online spaces

Explanation: Keith argues that we cannot regulate online what we are unable to regulate offline. He suggests that principles applied in offline spaces should be extended to online environments for consistency in regulation.
Major Discussion Point: Cultural and Legal Considerations in Online Safety

Focusing on building children's confidence and digital literacy

Explanation: Keith emphasizes the importance of building children's confidence and digital literacy to make them feel safe online. He argues that feeling safe comes from a point of confidence, which is built through literacy in security and cyber hygiene.
Evidence: Example of a three-year-old quickly learning to use a smartphone and download games.
Major Discussion Point: Strategies for Promoting Online Safety
Agreed with: Nirvana Lima, AUDIENCE
Agreed on: Importance of digital literacy and education
Differed with: Nirvana Lima
Differed on: Focus of online safety efforts

Nikki Colasso

Speech speed: 157 words per minute
Speech length: 1514 words
Speech time: 575 seconds

Implementing safety by design principles in product development

Explanation: Nikki advocates for implementing safety by design principles in product development. She suggests that safety features should be baked into products from the conception phase rather than retrofitted later.
Major Discussion Point: Strategies for Promoting Online Safety

Developing principle-based rather than prescriptive policies

Explanation: Nikki argues for developing principle-based policies rather than prescriptive ones. She suggests that this approach is more flexible and can better adapt to changing technologies and innovations.
Major Discussion Point: Strategies for Promoting Online Safety
Differed with: Keith Andere, Ponsleit
Differed on: Approach to online safety regulations

Improving age verification technologies while protecting privacy

Explanation: Nikki discusses the challenges of age verification technologies. She points out the need to balance effective age verification with protecting users' privacy and preventing the collection of excessive biometric data.
Evidence: Mention of the debate around Australia's social media ban for under 16s.
Major Discussion Point: Strategies for Promoting Online Safety

Balancing corporate responsibility with user empowerment

Explanation: Nikki emphasizes the need to balance corporate responsibility with user empowerment in ensuring online safety. She acknowledges that tech companies need to do more while also providing tools for parents and users.
Evidence: Example of Roblox implementing new parental tools to prevent one-on-one conversations outside of games.
Major Discussion Point: Stakeholder Collaboration for Online Safety
Agreed with: Keith Andere, AUDIENCE
Agreed on: Need for stakeholder collaboration in ensuring online safety

Addressing differences in content moderation across cultures

Explanation: Nikki highlights the challenges in content moderation across different cultures. She points out that what may be considered hate speech or illegal content in one country might be protected speech in another.
Evidence: Example of Holocaust denial being illegal in France but potentially protected speech in the US.
Major Discussion Point: Cultural and Legal Considerations in Online Safety

Nirvana Lima

Speech speed: 105 words per minute
Speech length: 840 words
Speech time: 476 seconds

Excessive screen time and digital addiction among children

Explanation: Nirvana highlights the issue of excessive screen time and digital addiction among children. She points out that this has been recognized as a disorder by the World Health Organization, raising concerns for parents and educators.
Evidence: Reference to World Health Organization's recognition of digital addiction as a disorder in 2018.
Major Discussion Point: Challenges in Ensuring Online Safety for Children and Youth

Integrating media education into school curricula

Explanation: Nirvana suggests integrating media education into school curricula as a strategy to promote online safety. She argues that this approach is necessary to equip children and adolescents with the skills to use the internet in ways that serve their best interests.
Major Discussion Point: Strategies for Promoting Online Safety
Agreed with: Keith Andere, AUDIENCE
Agreed on: Importance of digital literacy and education
Differed with: Keith Andere
Differed on: Focus of online safety efforts

Ponsleit

Speech speed: 148 words per minute
Speech length: 211 words
Speech time: 85 seconds

Implementing Online Safety Acts at the national level

Explanation: Ponsleit advocates for implementing Online Safety Acts at the national level, similar to what the United Kingdom has done. He argues that such acts can help protect both children and adults online by holding social media companies more accountable.
Evidence: Reference to the UK's Online Safety Act implemented in 2023.
Major Discussion Point: Strategies for Promoting Online Safety
Differed with: Keith Andere, Nikki Colasso
Differed on: Approach to online safety regulations

Millenium Anthony

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Considering diverse cultural contexts in policy development

Explanation: Millenium raises the question of how to balance global standards of online safety with diverse cultural and legal frameworks in specific regions and countries. This highlights the need for considering cultural contexts in developing online safety policies.
Major Discussion Point: Cultural and Legal Considerations in Online Safety

Unknown speaker

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Developing global standards based on children's rights

Explanation: The speaker argues for the development of global standards for online safety based on children's rights. They emphasize that children around the world are facing similar risks and harms online due to the use of the same products.
Evidence: Reference to existing frameworks such as the Convention on the Rights of the Child and General Comment 25.
Major Discussion Point: Stakeholder Collaboration for Online Safety

Agreements

Agreement Points

Need for stakeholder collaboration in ensuring online safety

Speakers: Keith Andere, Nikki Colasso, AUDIENCE
Arguments: Promoting cross-border collaboration and harmonized legal frameworks; Balancing corporate responsibility with user empowerment; Involving open source communities in developing safety features
Summary: Speakers agree on the importance of collaboration between different stakeholders, including governments, tech companies, and open source communities, to effectively address online safety issues.

Importance of digital literacy and education

Speakers: Keith Andere, Nirvana Lima, AUDIENCE
Arguments: Focusing on building children's confidence and digital literacy; Integrating media education into school curricula; Lack of awareness and digital literacy among children and parents
Summary: Speakers emphasize the need for digital literacy and education programs to empower children, parents, and educators in navigating online spaces safely.

Similar Viewpoints

Both speakers highlight the importance of considering cultural differences when developing online safety policies and content moderation practices.
Speakers: Keith Andere, Nikki Colasso
Arguments: Need for region-specific policies due to cultural differences; Addressing differences in content moderation across cultures

Both speakers advocate for proactive approaches to online safety, emphasizing the need for built-in safety features and global standards to protect children's rights.
Speakers: Nikki Colasso, Unknown speaker
Arguments: Implementing safety by design principles in product development; Developing global standards based on children's rights

Unexpected Consensus

Limitations of parental controls and user empowerment

Speakers: Nikki Colasso, Unknown speaker
Arguments: Balancing corporate responsibility with user empowerment; Developing global standards based on children's rights
Summary: Despite representing different perspectives (tech industry and children's rights advocacy), both speakers agree that relying solely on parental controls and user empowerment is insufficient, emphasizing the need for corporate responsibility and global standards.

Overall Assessment

Summary: The main areas of agreement include the need for stakeholder collaboration, the importance of digital literacy and education, consideration of cultural differences in policy-making, and the implementation of proactive safety measures.

Consensus level: There is a moderate level of consensus among the speakers on the key challenges and potential solutions for ensuring online safety for children and youth. This consensus suggests a growing recognition of the complexity of the issue and the need for multi-faceted approaches involving various stakeholders. However, there are still differences in emphasis and specific strategies proposed by different speakers, indicating that further dialogue and collaboration may be necessary to develop comprehensive and effective solutions.

Differences

Different Viewpoints

Approach to online safety regulations

Speakers: Keith Andere, Nikki Colasso, Ponsleit
Arguments: Need for region-specific policies due to cultural differences; Developing principle-based rather than prescriptive policies; Implementing Online Safety Acts at the national level
Summary: Keith argues for region-specific policies, Nikki advocates for principle-based policies, while Ponsleit suggests implementing national Online Safety Acts.

Focus of online safety efforts

Speakers: Nirvana Lima, Keith Andere
Arguments: Integrating media education into school curricula; Focusing on building children's confidence and digital literacy
Summary: Nirvana emphasizes integrating media education into school curricula, while Keith focuses on building children's confidence and digital literacy.

Unexpected Differences

Role of parents in ensuring online safety

Speakers: AUDIENCE, Nikki Colasso
Arguments: Lack of awareness and digital literacy among children and parents; Balancing corporate responsibility with user empowerment
Summary: While the audience member emphasizes parental awareness, Nikki unexpectedly argues that relying solely on parental controls is unrealistic, shifting more responsibility to tech companies.

Overall Assessment

Summary: The main areas of disagreement revolve around the approach to online safety regulations, the focus of online safety efforts, and the role of different stakeholders in ensuring online safety.

Difference level: The level of disagreement is moderate. While there are differing views on specific approaches, there is a general consensus on the importance of online safety for children and youth. These differences highlight the complexity of the issue and the need for a multi-faceted approach involving various stakeholders.

Partial Agreements

Both agree on the need for tailored approaches to online safety, but Nikki focuses on product development while Keith emphasizes regional policy differences.
Speakers: Nikki Colasso, Keith Andere
Arguments: Implementing safety by design principles in product development; Need for region-specific policies due to cultural differences

Takeaways

Key Takeaways

– Online safety for children and youth is a complex global issue requiring multi-stakeholder collaboration
– There is a need for both technical solutions (safety by design) and education/awareness initiatives
– Cultural contexts and legal frameworks vary, necessitating adaptable approaches
– Corporate responsibility of tech companies is crucial, but user empowerment is also important
– Age verification remains a significant challenge, balancing effectiveness with privacy concerns

Resolutions and Action Items

– Integrate media education and digital literacy into school curricula
– Develop and implement national Online Safety Acts
– Improve collaboration between tech companies, policymakers, and child rights organizations
– Focus on designing products with safety principles from the start

Unresolved Issues

– How to effectively implement age verification without compromising privacy
– Balancing global standards with local cultural and legal contexts
– Addressing resource disparities between countries in implementing online safety measures
– How to make online safety education accessible in rural and low-income areas

Suggested Compromises

– Adopting principle-based policies rather than prescriptive ones to allow for cultural adaptations
– Balancing parental controls with default safety settings on platforms
– Finding a middle ground between outright bans and unrestricted access for children

Thought Provoking Comments

"I think that where we are now is really looking at defaults and looking at what are the initial settings on these platforms, including Roblox, and acknowledging that we need to provide that safety net in addition to the parental controls."

Speaker: Nikki Colasso
Reason: This comment shifts the focus from relying solely on parental controls to emphasizing the responsibility of tech companies in providing safe default settings.
Impact: It led to further discussion about the role of tech companies in ensuring online safety and the limitations of relying only on parental controls.

"So, I see that we as stakeholders in whatever format that we are in, whether civil society, whether governments, whether youth, or all these people, we need to start thinking security by design. And one of the things is having the kids here, so that we are not speaking for them, but we are also listening what they are saying."

Speaker: Keith Andere
Reason: This comment introduces the important concept of 'security by design' and emphasizes the need to include children's voices in discussions about online safety.
Impact: It broadened the conversation to consider a more holistic approach to online safety, including the direct involvement of children in policy-making.

"The truth is that kids and teens are not digital natives, despite what some may claim. This generation learns how to navigate online through trial and error, just like everyone else. They are vulnerable to exploitation, not only of their personal data, but also within the influencer economy, which includes advertising agencies and talent agents for child celebrities."

Speaker: Nirvana Lima
Reason: This comment challenges the common assumption that children are inherently tech-savvy and highlights their vulnerabilities in the digital space.
Impact: It shifted the discussion towards recognizing the need for comprehensive digital education and protection for children, rather than assuming they can navigate online spaces safely on their own.

"Well, guys who build the systems often get forgotten in these discussions. And the time you're building such a system, you're just looking at the target. I need to get this software out by this and this day. So most of the times we are saying, hey, the system should do this, should have this two-factor authentication and all these other things. But if I'm working on a deadline, trust me, I'm going to leave all that stuff out."

Speaker: Joshua (audience member)
Reason: This comment brings attention to an often overlooked stakeholder – the developers – and highlights the practical challenges in implementing safety features.
Impact: It introduced a new perspective on the challenges of implementing online safety measures and led to discussion about involving open source communities in developing safety solutions.

"I think age verification continues to be the question that really troubles people across tech and policy circles and in the public sector."

Speaker: Nikki Colasso
Reason: This comment highlights a critical challenge in implementing online safety measures for children – the difficulty of effective age verification.
Impact: It sparked further discussion about the complexities of balancing privacy concerns with safety measures, and the limitations of current technological solutions.

Overall Assessment

These key comments shaped the discussion by broadening the scope of stakeholders involved in online safety (from parents to tech companies, policymakers, developers, and children themselves), highlighting the complexities and challenges in implementing effective safety measures, and emphasizing the need for a more holistic, collaborative approach to online safety. The discussion evolved from focusing on parental controls to considering systemic changes in how online platforms are designed and regulated, while also recognizing the importance of digital literacy and education.

Follow-up Questions

How can we develop region-specific policies for online safety that consider local contexts?
Speaker: Keith Andere
Explanation: This is important to address the unique challenges and cultural differences in various regions, especially in Africa.

How can we address resource disparities between countries in implementing online safety measures?
Speaker: Keith Andere
Explanation: This is crucial to ensure that countries with limited financial and technical resources can still protect their children online.

How can we integrate media education into school curricula from primary to high school?
Speaker: Nirvana Lima
Explanation: This is important to equip children with the skills to use the internet safely from an early age.

What are the commonalities in laws focused on children and youth across different countries?
Speaker: Audience member
Explanation: Understanding these commonalities could help in developing more universal approaches to online safety.

How can we improve age verification technologies without compromising user privacy?
Speaker: Nikki Colasso
Explanation: This is important to effectively implement age restrictions while protecting users' personal data.

How can we better align corporate policies with child rights and safety principles?
Speaker: Leander (audience member)
Explanation: This is crucial to ensure that business interests do not override child safety concerns in tech companies.

What are the most effective ways to build digital literacy and cybersecurity awareness among children?
Speaker: Keith Andere
Explanation: This is important to empower children to navigate online threats and use the internet safely.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #164 Strengthening content moderation through expert input

WS #164 Strengthening content moderation through expert input

Session at a Glance

Summary

This discussion focused on how social media platforms can engage with external stakeholders, particularly academics and human rights experts, to improve content moderation policies. The panel featured representatives from Meta, academia, and human rights organizations.

Jeffrey Howard argued that academics should take an “educative” rather than “activist” approach when advising platforms, providing frameworks and insights rather than pushing specific policy positions. Conor Sanchez from Meta described their process for consulting experts on policy development, giving examples of how external input shaped policies on crisis situations, functional identification, and human smuggling content.

Tomiwa Ilori emphasized the importance of meaningful collaboration with local experts and institutions, centering victims’ experiences, and adopting a bottom-up governance approach. He also stressed the need for platforms to be transparent about how they apply expert input and to proactively address human rights concerns.

Participants discussed challenges around language capacity and cultural competence in content moderation. Meta representatives highlighted their investments in multilingual moderation and partnerships with local NGOs, while acknowledging ongoing difficulties.

The discussion underscored the complexity of content moderation decisions, with stakeholders often disagreeing on optimal policies. Speakers emphasized the importance of sustained engagement with diverse experts, iterative policy development, and balancing different perspectives. Overall, the panel illustrated the intricate process of incorporating external expertise into platform governance while retaining ultimate decision-making responsibility.

Keypoints

Major discussion points:

– The role of academics in engaging with social media platforms on content moderation policies

– How Meta conducts consultations with external stakeholders and experts to inform policy decisions

– Ways platforms can learn from human rights experts to ensure rights-centered content moderation

– Challenges around language, cultural context, and capacity in global content moderation efforts

– The process of making difficult policy decisions based on stakeholder input

Overall purpose:

The goal of this discussion was to explore how social media platforms like Meta engage with external experts, particularly academics and human rights specialists, to develop content moderation policies that are effective, ethical, and rights-respecting on a global scale.

Tone:

The tone was largely informative and collaborative, with speakers sharing insights from their experiences working on these issues. There was an emphasis on the complexity of the challenges and the need for ongoing dialogue and iterative processes. The tone remained consistent throughout, maintaining a constructive and solution-oriented approach to discussing difficult policy questions.

Speakers

– Tomiwa Ilori: Advisor for BD Tech Africa Project by the UN Human Rights

– Conor Sanchez: Content policy team and stakeholder engagement team at Meta

– Jeffrey Howard: Academic researcher, ethicist

– Ghandi Emilar: Moderator

Additional speakers:

– Naomi Schiffman: From the Oversight Board (mentioned but not present)

– Mike Walton: UNHCR

– Adnan: Participant from Iraq

Full session report

Expanded Summary of Discussion on Social Media Platform Engagement with External Stakeholders

This discussion explored how social media platforms, particularly Meta, engage with external stakeholders such as academics and human rights experts to improve content moderation policies. The panel featured representatives from Meta, academia, and human rights organisations, focusing on the complexities of developing effective, ethical, and rights-respecting content moderation policies on a global scale.

Role of Academics in Platform Policy Development

Jeffrey Howard, an academic researcher and ethicist, argued for an “educative” rather than “activist” approach for academics engaging with social media platforms. He posited that academics should provide frameworks and insights rather than pushing specific policy positions, preserving their distinctive role and differentiating their input from that of other stakeholders.

Howard contended that academics should not view themselves as voting stakeholders in platform decisions, as they are not directly affected by policies in the same way as other constituents. Instead, he suggested that academic engagement with platforms can be intellectually generative, offering unique perspectives and analytical frameworks to inform policy development.

Meta’s Stakeholder Engagement Process and Content Moderation Efforts

Conor Sanchez, representing Meta’s content policy and stakeholder engagement teams, detailed the company’s process for consulting experts on policy development. Meta’s approach is based on principles of inclusivity, expertise, and transparency. Sanchez provided examples of how external input has shaped policies on crisis situations, functional identification, and human smuggling content.

Sanchez highlighted Meta’s significant investment in safety and security, with over 40,000 people working on these issues. The company’s content moderation efforts cover more than 70 languages, demonstrating the scale and complexity of their operations. Sanchez also mentioned Meta’s Trusted Partner Program, which facilitates collaboration with local NGOs and experts.

The human smuggling content policy was discussed as a prime example of the complexity in policy-making. Sanchez explained how Meta had to balance humanitarian concerns with the need to prevent exploitation, resulting in a nuanced policy that allows certain types of content while prohibiting others.

Human Rights-Centred Approach to Content Moderation

Tomiwa Ilori, an advisor for the BD Tech Africa Project by the UN Human Rights, emphasised the importance of a human rights-centred approach to content moderation. He advocated for meaningful collaboration with credible human rights institutions, particularly those with local expertise and “boots on the ground” in specific contexts. Ilori also highlighted the relevance of the UN guiding principles on business and human rights in this context.

Ilori stressed the need to centre victims’ experiences and adopt a bottom-up governance approach. He argued for increased access to platform data for independent research, especially in underserved contexts such as the majority world. This approach, Ilori contended, would provide more contextual nuances and understanding of issues on the ground, informing how platforms can learn from their impact in diverse settings.

Challenges in Content Moderation

The discussion highlighted several significant challenges in global content moderation efforts:

1. Language and Cultural Differences: Sanchez acknowledged the difficulties in moderating content across diverse linguistic and cultural contexts. Despite Meta’s investment in multilingual moderation, challenges persist, as evidenced by an audience question from Iraq about the lack of Arabic-speaking content moderators.

2. Capacity and Resource Constraints: Mike Walton from the UN Refugee Agency raised concerns about the capacity to support content moderation across a wide breadth of languages.

3. Accessibility for Local Stakeholders: An audience member from Iraq, Adnan, pointed out the difficulty some local researchers and NGOs face in reaching Meta experts to engage on issues or ask questions.

4. Sustaining Engagement: Ghandi Emilar, the moderator, highlighted the challenge of sustaining ongoing engagement with academics and experts over time. The importance of relationship-building and long-term collaboration was emphasised as crucial for effective policy development.

Areas of Agreement and Disagreement

The speakers largely agreed on the importance of diverse stakeholder engagement and the need for sustained, ongoing collaboration between platforms and experts. There was also consensus on the significant challenges faced in content moderation across different languages and cultures.

However, some differences emerged in approaches to stakeholder engagement. While Jeffrey Howard advocated for an educative role for academics, Tomiwa Ilori emphasised the importance of including victims’ voices and lived experiences in policy development. Similarly, while Conor Sanchez focused on Meta’s existing process of consulting various experts, Ilori advocated for a more bottom-up approach emphasising local context and actors.

Conclusion and Future Directions

The discussion underscored the complexity of incorporating external expertise into platform governance while retaining ultimate decision-making responsibility. It highlighted the need for ongoing dialogue and iterative processes in developing content moderation policies. Sanchez emphasised that policy-making is an ongoing process, with continuous refinement based on new information and stakeholder input.

Several unresolved issues and potential areas for further exploration emerged, including:

1. Effectively scaling bottom-up content governance approaches

2. Balancing conflicting stakeholder perspectives in policy decisions

3. Improving accessibility for local stakeholders to engage with platforms

4. Sustaining long-term engagement with academics and experts

5. Increasing platform capacity to support content moderation across diverse languages

6. Exploring the potential role of AI in multilingual content moderation

7. Establishing best practices for imported content moderation labels

8. Finding ways for platforms to support objective institutional research without compromising independence or credibility

These points suggest a rich agenda for future discussions and research on the intersection of social media governance, content moderation, and human rights.

Session Transcript

Ghandi Emilar: panelists online, three speakers online, if you can just quickly introduce yourselves.

Tomiwa Ilori: Okay. Hi, my name is Tomiwa Ilori. I'm currently an advisor for the BD Tech Africa Project by the UN Human Rights. Thank you for having me.

Ghandi Emilar: Tomiwa, when you speak next, please increase your volume. Can we go to Conor, please?

Conor Sanchez: Yes, thank you. Hi, everybody. My name is Conor Sanchez, and I am with Meta. I'm on the content policy team, and specifically on the stakeholder engagement team. Pleased to be with you all today. I'm sitting in California.

Ghandi Emilar: Thank you so much, Conor. And we have our third speaker online; maybe when she joins us, we can ask her to introduce herself. So, quickly going through some of the challenges that we face as we do external stakeholder engagement at Meta. The first one, which I think most of you face as well in your work, is really identifying the experts. Who are these experts? How do we even define what expertise is? Can we look at lived experiences? Can we look at the impacted ones? Who are the potentially impacted? Who are the vulnerable? Who are the underrepresented groups? So identifying experts in itself is a challenge. The second one is really, once we identify the experts, how do we manage conflicting interests within the stakeholder maps that we have? What are the agendas that they have that can influence input and objectivity on our policies, on our product policies, on our content policies? But beyond just identifying the experts, it's really acknowledging that there's a spectrum of experts; it's not just one type of expert, not just the academics or civil society groups that I'm seeing in the room and also online. The third one is the power dynamics: not all NGOs are the same, not all stakeholders are the same. Different stakeholders have different levels of influence within the stakeholder groups themselves and between the different stakeholder groups. How do we also communicate complex information? I'm happy to see Jovan from Diplo; as a former Diplo ambassador, I know you don't have your headphones. It's important; I think we have benefited, personally I've benefited, from the capacity building programs that the organization has run. And for us as Meta, it's important to acknowledge that the stakeholders we are engaging might have lived experiences, they might be experts in their fields, but not everyone understands our policies. So we have to really work hard and ensure that before we communicate that complex information, or before we communicate any of the policy changes, we engage in capacity building. Opportunities, I think there are many, and this panel will look at some of those opportunities. Access to specialized knowledge: we don't want this to just be an extractive process, we want it to be mutually beneficial, not only to us, but also to the experts that we are speaking to. It improves our policies, that goes without saying: not only the substance, but the process itself and the credibility of the work that we are doing. Transparency, I think, is also another opportunity. It sounds like a very easy concept, but obviously not, because with transparency comes accountability, and that's something I think that we need to talk about. And also building trust. We know that there's a trust deficit between us and stakeholders. Do we need intermediaries to help us build that trust, or is this something that we can work on? And we know that building trust is not a sprint, but a marathon that we need to ensure we are in for the long haul. I will just end here, but I think on opportunities, we can also talk a lot more about what we can gain from the process itself.
I think, moving over, should I start with you? What are some of the issues, your experiences working with Meta in terms of stakeholder engagement? And then we can go into specific questions.

Jeffrey Howard: That sounds great. So I've been given a brief to speak for about eight to ten minutes about my experience, and I'm going to be thinking in particular about the role of academics in this process. So consider just some of the questions that bedevil policymakers at platforms like Meta. Should platforms restrict veiled threats of violence, or only explicit threats of violence? Should rules against inciting or praising violence be modified to include a carve-out for speech that advocates or praises justified self-defense? When should graphic content depicting real-world violence be permitted for awareness-raising purposes? When should otherwise violating content be permitted on grounds of newsworthiness, for example, because the speaker is an important politician? What kinds of violations should result in permanent bans from a platform, and what kinds should result in temporary suspensions? How can platforms better monitor and mitigate suspicious conduct by users in order to prevent abusive behavior before it happens? So these are just some of the topics about which I've engaged with social media platforms over the years in my work as an academic researcher. I've engaged principally with various teams within Meta, also teams within the Oversight Board, and policymakers throughout the UK and EU. And the thing about the questions I just listed that I want to call your attention to is that they're not empirical questions that can be answered with social science. They're normative questions about how to strike the balance between different ethical values when they come into conflict. Now, the academic discipline of ethics is entirely dedicated to exactly that issue. And so my role as an ethicist is to bring the tools of ethics, conceived widely, to bear on the proper governance of online speech and behavior. And what I want to do in the next couple of minutes is to sketch two alternative theories of the proper role of academics in undertaking this kind of work, tracing some of their implications for how we should engage with platforms. So the first conception I'll discuss is what I'll call the activist conception, and I think this is really common. On this view, the academic has already made up his or her mind about what the right answer is on a particular issue and sees her role as that of pressuring or persuading or lobbying the platform to adopt her view. So consider that question I mentioned about whether there should be a self-defense carve-out to the policy prohibiting advocacy of violence. On this approach, the academics have already made up their mind about whether it's yes or no, and the goal is simply to persuade platforms to go their way. Usually, academics who follow this approach have already written an academic paper publishing exactly the view that they hope to defend, and then they want to be able to show that that paper has had impact, for professional incentive reasons. So they're really activists for their own research. Now, I think this is a really common way for academics to engage, and I think it's completely misguided. I think it's the wrong way for academics to engage. I think we should reject the activist conception of the role of academics in stakeholder engagement, and I think we should reject it because it diminishes the distinctive role that I think academics can play in this process, because it eliminates the distinction between the role academics can play and the role that other stakeholders can play.
Now, if you work for an NGO dedicated to fighting violence against women and girls or an organization dedicated to children's mental health, I think the activist conception makes complete sense. You've figured out what policy best serves the needs of those you represent, and you're going to the wall for those people to advocate for that policy. And so the activist view flows from these organizations' purpose. But I'd argue that the distinctive role of an academic isn't to be an activist. It's something else, and that leads me to the second view, which is the one I'll defend, and for lack of a better term, I'll call it the educative view. And the idea here is that the role of the academic is to educate the audience about the relevant cutting-edge academic research that bears on whatever the topic is under discussion. And in this way, it draws on the way academics ideally already teach their classes, which is to inform students about the range of research pertinent to a particular topic. So when I teach a class in London on the ethics of counter-terrorism policy or the ethics of crime and punishment, I'm not just teaching my own preferred views in those various controversies; I teach the most reasonable arguments on each side of an issue so that students are empowered to make up their own minds. Likewise for my colleagues in empirical political science: when they're teaching, for example, the causes of political polarization, the professor doesn't just teach students his own favorite explanation that he's published on in a recent article in the American Political Science Review. The right way to teach a class on that topic would be to identify the range of potential causes in the academic literature, pointing out the evidence for and against. Now, he might also flag that he favors a particular view, but his goal isn't to ram his preferred theory into students' brains; it's to empower them with frameworks and insights so that they can make up their own minds. And my thought for you today is that that educative conception should guide academics in how they engage with platforms and other decision makers. Our role isn't just to tell platforms what we think the right answer is as we see it, as if platforms were counting votes among stakeholders. And by the way, even if platforms were counting votes among stakeholders, it's not clear academics should get a vote, since we're not really stakeholders; we're not particularly affected by policies in the way particular constituents are. Our input is solicited because we have knowledge that's relevant to their decision. Our role is to give platforms insights and frameworks so that they can make up their own minds. So let me make that just a little more concrete for you before I finish. When I first engaged with Meta on the topic of violent threats and whether veiled threats should be restricted, I saw my role as getting them up to date with philosophical theories about what threats are, about how they function, about what harm they can cause, why speakers might have a moral duty to refrain from threatening language, and what legitimate role sarcastic or hyperbolic threats might play in valuable self-expression. I also saw my role as informing them about theories from legal philosophy about what to do in tricky cases where all the candidate rules in a given policy area are either under-inclusive or over-inclusive, which I think happens quite a lot in the content moderation space.
Likewise, when my team presents public comments to the Oversight Board, we of course indicate what result we think the Oversight Board should reach, but that's much less important than the framework of arguments we offer to reach that conclusion. So, for example, one central critique of deploying international human rights norms for content moderation is that these norms fail to offer adequate guidance: they're just too indeterminate. But those who make this critique in the literature almost always overlook the fact that there's a huge amount of cutting-edge philosophical work on principles like necessity and proportionality, which I think can be really, really helpful in giving guidance to content moderation decision makers. And so part of my role is to help decision makers within platforms learn about that work. Wrapping up now, I'd like to emphasize that the case for the educative model is bolstered by the obvious fact that experts disagree about what to do. And so academics simply cheerleading for one side of the argument is not particularly helpful. The role of academics is to supply platforms with the insights they need to exercise their own judgment about what to do. And I think judgment on ethical questions is essential. If I were to tell you that I was opposed to the death penalty and you asked me why, and I said, well, I asked some ethics professors and they told me they were opposed to it and I believed them, that would be an intellectually and morally unserious set of reasons for having that view. We are all responsible for making our own judgment about what's right and wrong. And while ethicists can help us think through the arguments, the judgment about which argument is most convincing must ultimately be ours. And that goes for a platform too. Platforms like Meta can consult experts, but ultimately it's their responsibility to make a judgment about what to do. The last comment I'll make is just that many academics are reluctant to engage with decision makers in this space. And I think that's a huge mistake, because engaging with platforms and other decision makers like the Oversight Board is hugely intellectually generative. It can help us identify new topics to write and think about, and it can also give us an opportunity to make a positive practical difference through our work. So that's how I see the role of academics in engaging with platforms. Thanks.

Ghandi Emilar: Thank you so much, Jeff. This is really, really useful. I think one of the points that I took here is "intellectually and morally unserious" views; I think I'll use it moving forward. You really put forward, I think, a compelling argument on why academics should engage in these spaces, and I'm sure a lot of people have questions for you. But if we can just move on to the other speakers, we'll get back to you. Now I want to move on to Conor, who leads our external engagement and who, with Jeff, is the brains behind this workshop, for him to take us through some of the case studies that show how our engagements with academics have impacted policy decisions. So Conor, over to you. Could we put up Conor's screen? Oh, there it is. Great, everyone can see it now. Super.

Conor Sanchez: Wonderful. Yes, thank you so much. Can you hear me okay?

Ghandi Emilar: Yes, we can hear you.

Conor Sanchez: Great. Thank you so much, Emilar. And thanks, Jeff, for that first set of comments and provocation for this discussion. For everybody joining, again, my name is Conor Sanchez. I'm on the stakeholder engagement team here at Meta, and I'm going to build off of Jeff's remarks to briefly share a bit about how we carry out consultations with external stakeholders, including academia as well as independent researchers. We engage these experts for a variety of reasons and on a wide variety of topics, so I think this will give you a taste of how that process runs and how we take those consultations and the insights they share into account as we work through a particular policy. Just backing up for a second, for those who may be unaware, our content policy team is the team that's in charge of our community standards. The community standards, at the simplest level, are rules to make our platforms a space where people feel empowered, where they feel safe to communicate. And importantly, these standards are based on feedback: feedback we've received from a wide variety of individuals who use our platform, but also the advice of experts. And I have a few case studies that I think exhibit exactly how these consultations have had an impact on our policy. An important detail about our community standards: these are global, they apply to everyone around the world, and we've become increasingly transparent about where we draw the line on particular issues. Laying out these policies in detail allows us to have a more productive dialogue with stakeholders on how and where our policies can improve. As Emilar mentioned, we do a lot of capacity building. We realize that not everybody is extremely savvy about exactly how our rules work or how our enforcement works, so we also do a lot of education to make sure that people understand where the status quo is and why we've drawn the line in certain areas, even as we seek their feedback on improving and evolving our community standards. So as you can see, this is a long list of what can be found in our Transparency Center. It covers quite a bit: everything from hate speech to violent and graphic content to adult nudity and bullying on our platforms. The consequences for violating our community standards vary depending on the severity of the violation and the person's history on the platform. So if you violate any one of these rules, that receives different enforcement mechanisms, and that in and of itself is something that we seek feedback on. What is the proportional response to somebody who violates our rules? What happens if a rule is violated twice, three times, or seven times? At what point does that person... we want people to learn about our rules. We want them to get better and come back and be responsible community members. And so at what stage is that appropriate for our enforcement mechanisms? And just to give you a sense of how we involve our experts in our policy development process, we really bring them into a very robust process of how we're developing a policy. We create an outreach strategy to make sure that we are including a wide range of stakeholders, and then we carry out that outreach. Ultimately, as Jeff said, the decision sits with us. We take everything that we've heard from our consultations, we provide that to our internal teams and to leadership, and we make a policy recommendation at what's called our policy forum.
This is sort of the preeminent space within the company where we consider some of the biggest questions plaguing our community standards and make a decision on the direction we want to go in. In terms of who we engage, this is the question I get the most: how do you decide who to engage with? How do you find relevant experts? How do we make sure that vulnerable groups, or groups that haven’t been heard, are heard in the process? There’s no simple formula for this, but we have developed a structure and a methodology that guide us as we reach out externally. First, we can’t meaningfully engage with billions of people, though that is certainly our stakeholder base; it includes billions of people. So we seek out organizations that represent the interests of others. We also look for expertise in particular fields, and these don’t have to be experts in content moderation or content enforcement, or even internet governance or platform governance; they could be experts in irregular migration or in psychology. All of those things can be informative for our policy. In terms of the categories of stakeholders, we’re looking at NGOs, academic researchers, and human rights experts. They can also be people with lived experiences who are on our platforms, using our tools in certain ways. And in terms of guiding who we engage, we really have three principles or values that we look for: inclusivity, expertise, and transparency, making sure that we’re building trust with the stakeholder base as we speak with them. So, jumping into a few examples of how this has actually played a part in our policy development process. In 2022, we published what’s called our crisis policy protocol, which codified our content policy responses to crisis situations. The framework we aimed to build would assess crisis situations that may require a specific policy response, so we explored how to strengthen our existing procedures and include new components such as certain criteria for entry into and exit from a crisis designation. As we developed this, we sought consultations with global experts with backgrounds in things like national security and international relations, humanitarian response, conflict and atrocity prevention, and human rights. In these consultations, the stakeholders and experts we spoke to helped surface the key factors that would be used to determine whether a crisis threshold has been met: for instance, certain political events, large demonstrations in the street, or certain states of exception or policies being put into place. All of these were based on the experience and expertise of the experts we consulted, and they really informed the criteria that we continue to use to this day. Another example is our functional identification process. This policy focused on how we treat content that could identify individuals beyond explicit factors such as a person’s name or an image, for which we already had policies: if somebody’s name or image was shared in a certain context and that posed a risk to them, we would remove that content.
But functional identification concerns more subtle factors: information shared about an individual without naming them that could still result in their being identified, and they could be put at risk as a result of that identification. The expertise we sought for this policy development included privacy and data security experts, as well as journalists, who are often publishing the names of individuals in their stories, individuals who sometimes may need to remain anonymous. From there, we’re really drawing on decades, if not centuries, of experience from people who have grappled with this question before: what details to provide in a publication that will be read by many, many people, and therefore the types of guidelines they need to put in place to protect those identities. We also spoke with a global women’s safety expert advisor group that we manage, which includes various non-profit leaders, activists, and academic experts focused on the safety of women on and offline. And this stakeholder input, these engagements, helped our team develop a policy that, upon escalation, allows us to consider additional factors beyond just name and image, including somebody’s age, their ethnicity, and their distinctive clothing: if all three of those in combination are published online, and we have a signal from a local NGO saying that this could put somebody at risk, then that allows us to remove that content under our new policy. And a last example of how expert input played a role in our policies. In 2022, we developed a policy on how to treat content soliciting human smuggling services. Our policies at the time, under our human exploitation policies, distinguished human smuggling from human trafficking, recognizing human smuggling as a crime against the state and human trafficking as a crime against a person. What we wanted to tease out with experts was really figuring out: what are the risks posed to people who solicit this type of content online? What are the risks associated with leaving this content up? And what are the risks associated with removing it? We heard a wide variety of insights from the stakeholders we spoke with. The experts included people who work at international organizations focused on migration, refugee protection, and organized crime; academics who focus on irregular migration, human smuggling, and refugee and asylee rights; and criminologists. We also spoke with former border enforcement officials, people who have worked at borders around the world, and we really drew on this expertise to figure out where we should draw the line on this policy. They highlighted the risks posed to individuals, especially vulnerable individuals, who solicit this content. They also highlighted what removal would mean for somebody who may be in a very vulnerable position, escaping conflict, oppression, or otherwise unsafe conditions in their countries of origin. Ultimately, this led us to adopt a policy that minimized the risk of removing these types of posts by providing a safety page with information on immigration. So we would remove the solicitation of human smuggling services, but we would also provide a safety page for the individual who may be requesting that.
And in developing that safety page, we also consulted experts to determine what information would be most impactful for this vulnerable population. Great, thank you so much, and that concludes my remarks. I’ll pass it back to Emilar.
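Stepping back to the functional identification example, the decision rule Connor outlines is essentially a conjunction of weak signals plus a local risk report. The sketch below, in Python, captures that shape; the signal names and the three-signal threshold follow his example, but everything else (function names, the flag representing the NGO report) is an illustrative assumption rather than Meta’s implementation.

```python
# Toy model of the functional-identification escalation rule described above.
# Signal names and the "three combined signals plus a local risk report" rule
# follow the example in the talk; all other details are assumptions.
IDENTIFYING_SIGNALS = {"age", "ethnicity", "distinctive_clothing"}

def removable_on_escalation(signals_in_post: set[str], ngo_risk_report: bool) -> bool:
    """A post naming nobody can still identify someone when enough softer
    signals combine and a trusted local partner flags a real-world risk."""
    matched = signals_in_post & IDENTIFYING_SIGNALS
    return len(matched) >= 3 and ngo_risk_report

# The combination from the talk: age + ethnicity + distinctive clothing,
# with a local NGO flagging the risk -> removable on escalation.
print(removable_on_escalation({"age", "ethnicity", "distinctive_clothing"}, True))  # True
print(removable_on_escalation({"age", "ethnicity"}, True))                          # False
```

The design point is that no single signal triggers the policy on its own; it only bites on the combination, which is why journalistic and women’s-safety expertise about aggregation risks was so relevant to drafting it.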

Tomiwa Ilori: Thank you very much, Emilar. Can you hear me clearly? Can everyone hear me before I go on? Yes. Okay. Thank you again. Quickly to my question. My understanding is that the question can be subdivided into two broad areas: the first is how platforms can, and the second is how platforms should, learn from human rights experts to ensure a rights-centered model for content moderation. Like the speakers before me have said, there is usually really no one-size-fits-all, because, for example, in the context of Meta and other major platforms, they operate in very, very many contexts, including very complex ones. Saying this has to be the solution is going to be very problematic. So some of the things that I think platforms can learn, based also on my interaction with platforms like Meta in the past, are, number one, ensuring meaningful collaboration. And what do I mean by meaningful collaboration? This involves, for example, increasing collaboration with established and credible human rights institutions and organizations to identify human rights issues. It also involves devolving focus away from Western institutions who, in quotes, pretend to work on content moderation issues, especially in contexts where they do not have expertise, and instead identifying and working directly with institutions that have boots on the ground regarding content moderation in those contexts. This could help identify specific pain points for platforms and these actors, and collaborating with these institutions and organizations to think through possible solutions. And I think this has also been mentioned earlier by both Connor and Jeff. Number two is centering victims. What I mean by that is that it should involve broadening the scope of human rights expertise to include victim-centered feedback on the impact platforms have, especially on vulnerable persons. When we think of experts, I think we often miss out on centering the victims whose experiences are usually the focus of most engagements. One key way of learning from these experts is also to focus on including the voices of victims who are impacted by these activities, who may or may not be experts in content moderation and governance but have lived experiences. A third one is adopting a bottom-up content governance approach. What I mean by this is working with key actors and experts in specific domestic contexts, such as national human rights institutions, civil society, and academics. This provides more contextual nuances and understanding of the issues on the ground, how these actors are currently thinking about them, and how exactly platforms can learn from their impact on the ground. A similar example was given earlier by Connor regarding the crisis policy protocol, which revealed certain factors to consider in determining what qualifies as a crisis. A fourth way that platforms can learn from human rights experts is increasing access to platform data for independent research, especially in underserved contexts such as the majority world. There’s an increasing need to understand how platforms shape critical aspects of human rights challenges today.
But the tools for reaching such understanding, such as the raw platform data that could point to possible solutions to these challenges, are unavailable for analysis by most majority-world researchers. Lastly, another way that platforms can learn is identifying the trove of existing resources and platforms out there, including both technical and non-technical outputs developed by international organizations such as the UN, academic institutions, and civil society organizations. Not only this: where these resources are adapted for platform use, that should also be made transparent. For example, where certain resources are applied by platforms, it should be clear what was applied and why. And in cases where feedback is sought but not utilized, it should also be clear why. Now, the second part of the question, which I’m going to rush through quickly because of time, is what platforms should learn from human rights experts. Number one is practical application of human rights standards. And I know this is a very, very tricky and difficult area, especially for companies. But since human rights experts draw from human rights standards in their analysis of platform activities, it will be useful to look at the most proximate standard, and in this context the UN Guiding Principles on Business and Human Rights would easily apply. The UNGPs, especially as related to their application to technology companies, provide useful ways for companies to ensure that their activities are rights-centered. For example, one way UN Human Rights has done this is through the B-Tech project, which focuses on the application of the UNGPs to digital technologies, and they have quite a lot of resources in this area. B-Tech has four strategic focus areas: addressing human rights risks in business models; human rights due diligence and end-use; accountability and remedy; and a smart mix of measures, which involves exploring regulatory and policy responses to human rights challenges linked to digital technologies. Another way platforms should learn from human rights experts is ensuring participatory development of content moderation rules and processes, and I was happy to listen to Connor earlier, because that is a very practical demonstration of what this participatory development refers to. Third is proactive accountability. This helps to engender trust, and it involves operationalizing measures that make platforms accountable for human rights harms even before victims or the general public are aware of such harms. This includes, but is not limited to, proactive human rights impact assessments of products and services to identify harms, communication of the extent to which such harms impact human rights, and the steps taken to remedy them. Lastly, platforms should learn agile and dynamic adaptivity from human rights experts. What do I mean by that? Platforms can learn to be agile and adaptive when it comes to applying international human rights standards to emerging and cutting-edge content moderation challenges. For example, what should be the best standard practice, already highlighted by human rights experts, regarding imported content moderation labels? Another example: in what ways can platforms fund or support objective institutional research without impeding its independence or credibility?
So, in my view, these are, in what is of course a rushed presentation, more or less some of the ways that I think platforms can and should learn from human rights experts to ensure a rights-centered model for content moderation. Thank you very much, Emilar.

Ghandi Emilar: Thank you so much, Tomiwa, for that. And I think you raise an important point regarding platforms being very transparent about the input that they take into consideration and why, and not just communicating the outcome. I’m not sure if Naomi is online; I can’t see from here. Naomi, if you’re online, would you like to jump in? Naomi Schiffman is from the Oversight Board, and if she’s online, she will discuss how the Oversight Board contributes to policy development and also highlight how she built the Academic and Research Partnerships Program at Found Temple. Is she online? No. Okay. If she’s not online, I think we can move into the discussion phase. We have a few more minutes. But before I ask questions, we’ve been talking for the last few minutes; are there any questions for our experts? Yeah. Please introduce yourself.

Mike Walton: Hi everyone, it’s Mike Walton from the UN Refugee Agency, and we’ve worked with a number of people on the call, so good to see you. My question is about capacity. I love the ground-up approach, but I just wonder how much capacity there is, both on Meta’s side and at other content platforms, to really put that resource where it’s needed. The issue of language comes up again and again: the capacity to support the wide breadth of languages that are unsupported now. So how can we take that bottom-up approach, not just for policy development but also for content moderation, and make sure that we have a really strong infrastructure there? I know lots of people are putting AI at the heart of this, saying maybe it can help us moderate content going forward, and that might be one possibility, but the doubts are there. So yes, what can we do? Is there enough capacity, and if not, how can we increase that capacity?

Ghandi Emilar: Thank you so much, Mike. That’s a great question. Before we get back to it, any other questions? Yes, please introduce yourself, and thank you.

AUDIENCE: Thank you so much. This is Adnan from Iraq. Thank you so much for this interesting discussion; I actually learned from all of you. Last year, I participated in one of Meta’s events on community standards in Amman. It was actually helpful. I have a similar question, actually, because I’m from Iraq and I know that Iraq is a very diverse country. My question would be for Connor, regarding other languages: the policies you mentioned are maybe mostly in English, and I don’t know whether they are available in different languages so people can read about them. And the next question is about the engagement of stakeholders at the local level. In Iraq, for example, I feel there is a lot of difficulty in reaching Meta when a researcher or an NGO wants to engage, to ask a question or raise an issue. It’s really difficult to get to the experts. Thank you so much.

Ghandi Emilar: Thank you so much. And so good to see someone who attended our community summit here. Connor, can you take on some parts of the question? I’m happy to jump in.

Connor Sanchez: Yeah, just really quick. Thanks for the questions. I think language is a huge, huge part of content moderation and our enforcement. It’s obviously something we’ve invested in quite a bit over the last eight years or so. Just zooming out: overall, we’ve invested $20 billion in safety and security, and our trust and safety team at the company is made up of about 40,000 people who bring language expertise, but also expertise in certain policy violation areas and in areas such as safety and cybersecurity. Content moderation includes thousands of reviewers who moderate content across our platforms (Facebook, Instagram, and Threads) in about 70 different languages. We also do fact-checking with our third-party fact-checkers for misinformation in about 60 different languages. And for coordinated inauthentic behavior, which focuses on what many would consider foreign influence operations, taking down networks of operations, that has been done in about 42 different languages. So it’s something that we continually want to get better at. And in addition to the language differences, there are the cultural differences and colloquial nuances that come with every language. Even with something like Spanish, you have certain terms and ways of speaking that differ from Central America to South America. For that, another part of our content moderation apparatus that’s helpful is our Trusted Partner Program, a network of hundreds of NGOs around the world that we manage, who provide that local context and local insight. When there is a particular trend or a term that may only be used in a given jurisdiction or region, they can inform our policies as we’re developing something or taking action on particular pieces of content. But Emilar, anything else that I may have missed on that?

Ghandi Emilar: I think you have spoken about a lot of things there that are really, really relevant. Just to add, on some of the questions you asked, Mike, around capacity on both sides: Connor has mentioned that we have over 40,000 people in trust and safety, but I think you can never be at a point where you say, we have full capacity, we know everything. Cultural competence is very important for us to understand. I also wanted to mention one other thing: when we engage externally, it’s important to note that among external stakeholders, experts, or people with lived experience, some are willing and able to engage with us, and some are willing but unable. They’re unable because maybe connecting, the internet connection, is expensive, or because of language capacity. While we have some local team members, some people who can speak the languages, we also make sure that we either meet people where they are, where we can, or support their ability to engage, including connectivity support. But we know that when it’s a one-off it doesn’t work; we need to sustain it and make sure that it’s something we are continuously able to do. So we also look at the format of the engagement itself. On capacity, we need to continuously look at the context, know where we have gaps, and rely on our external experts to tell us, you could have done better on this. And we also learn a lot, not only from academics like Jeff or from NGOs, but also from humanitarian organizations, because you are on the ground, you know what’s happening, and you deal with people every day. And talking of sustaining these engagements, I just want to come back to you, Jeff. How can we sustain engagements with academics? Because a one-off really isn’t as meaningful as we want it to be. How can we ensure that it’s continuous?

Jeffrey Howard: Well, I think relationships are key to the story here, and making sure that there’s ongoing dialogue with the stakeholders over time. My experience participating in groups within Meta that have periodic meetings, where they revisit policy areas over time, is that it is extremely useful. And of course, as those relationships develop, they are reciprocated; I’ve been delighted to have lots of people from Meta and the Oversight Board participating in events at my university. So I think investing in those relationships is absolutely crucial here. I do have a question for Connor, if I can throw it in. Connor, can I take you back to your point about content soliciting smuggling? You talked about the fact that a lot of on-the-ground stakeholders with expertise on this issue counseled against banning that content. But in the end, you took the judgment that you should remove content soliciting smuggling.

Jeffrey Howard: “How do I get out of Libya? How do I get to Italy?”, for example.

Jeffrey Howard: But you have that information page of trusted third-party information. Can you talk us through how you made the decision not to defer to those on the ground who were saying, leave this content up? What was that experience like? Because it does seem to me like the right judgment, but of course it went against what some people thought you should do. So I wondered how you made that decision.

Connor Sanchez: Yeah, that’s a great question. This was an area where I can’t say opinion was neatly divided in terms of where people felt we should go with this policy. I think everybody we spoke with recognized, first and foremost, that this is a very, very difficult call. But the picture they painted for us was that people who are on the move are receiving information from a wide variety of sources, and they’re making decisions based on a thousand different factors. So yes, they’re online, but they’re also getting information in person, in migrant shelters, and from relatives in their hometown before they perhaps start on their journey. They’re making these decisions on a wide variety of information points. The thing they wanted us to really hone in on was to think about universal human rights standards as we approached this, in terms of proportionality. We aren’t the first entity to think about these challenges; there have been consultative processes in the past that we could take advantage of. And I think this comes back to Tomiwa’s point about the way we can learn from international human rights legal frameworks. The protocol on human smuggling was something we were urged to take a look at, and that document’s differentiation between human trafficking and human smuggling made sure we understood those two definitions. From our standpoint, we then began to think: we don’t necessarily need to make this a binary decision of remove or keep up. We could still remove those posts and still allow for some understanding of those who may be posting them by providing information through a safety page. Once we had that idea of providing a safety page, that was something we could introduce that would reduce the risk of removal. And once we went to stakeholders with that as an option, many of them, even the ones who were originally saying leave it up, were at least very, very warm to the idea: at least you can provide this safety page, which would serve to reduce the risk of just removing it.
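Read as decision logic, the resolution Connor describes is a third enforcement outcome sitting between “remove” and “keep up”. Here is a minimal sketch of that shape in Python, with all function and resource names as hypothetical illustrations; the transcript does not describe Meta’s real interfaces.

```python
# Hypothetical handler for a post soliciting human smuggling services:
# the post comes down, but the person behind it is shown a safety page
# instead of receiving a bare takedown. All names are illustrative only.
SAFETY_PAGE = "immigration_safety_resources"  # placeholder for the real resource

def handle_smuggling_solicitation(post_id: str, author_id: str) -> list[tuple[str, str]]:
    """Return the ordered enforcement actions for a soliciting post."""
    return [
        ("remove_post", post_id),        # the solicitation itself violates policy
        ("show_page", SAFETY_PAGE),      # mitigate the risk removal poses to a
                                         # potentially vulnerable person
        ("record_enforcement", post_id), # keep an auditable record of the action
    ]

print(handle_smuggling_solicitation("post-123", "user-456"))
```

The iterative step he highlights matters as much as the rule itself: the safety-page option was taken back to the same stakeholders before it shipped, which is what moved the “leave it up” camp toward acceptance.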

Ghandi Emilar: Thank you so much, Connor. I think we only have a minute or so. Do you want to give closing remarks, just a minute?

Jeffrey Howard: Well, I think Connor, in his wonderfully detailed answer, which gave us a real sense of how the process works, illuminated a really crucial feature of it: in these policy areas it is often an iterative process, where you go back to stakeholders with updates and they might themselves change their minds, because people’s views on the topics under discussion are often not fixed; they are the result of ongoing deliberation. And so one of the things we’re taking out of this panel is the importance of having ongoing conversations like these to improve our discussions about these topics. I’m ever so grateful to everyone for coming and for being involved in this discussion.

Ghandi Emilar: Thank you so much. I’m not sure if Tomiwa is still there. Do you want to give your closing remarks as well? Just a minute.

Tomiwa Ilori: Yes. Thank you very much, Emilar. It has been a pleasure to be here, and also to listen to the others and the questions being asked. I think conversations like this should continue to happen, and we should continue to put in the work, because, like you also said, Emilar, I don’t think there will ever be a point where we can say, okay, we’ve done everything that could be done regarding content moderation. Issues will always crop up that need diversified, multi-stakeholder contributions. So it’s a pleasure to be here, and thank you very much. Until some other time.

Ghandi Emilar: Yeah, thank you so much to everyone who’s in the room and everyone else who joined us online as well. I know Professor Howard is still around. So for those who still want to engage with him on site, please do. And Connor, Tomiwa, thank you so much for participating in this. Bye for now. Thanks, everybody.

Jeffrey Howard

– Speech speed: 180 words per minute

– Speech length: 1805 words

– Speech time: 599 seconds

Educative role rather than activist role

Explanation: Academics should adopt an educative approach when engaging with platforms, rather than an activist one. The goal should be to inform and empower platforms with frameworks and insights, not to push a particular viewpoint.

Evidence: Comparison to teaching methods in academic classes, where professors present various perspectives rather than just their own.

Major discussion point: Role of academics in platform policy development

Differed with: Tomiwa Ilori (on the role of academics in platform policy development)

Providing frameworks and insights rather than pushing preferred views

Explanation: Academics should focus on supplying platforms with insights and frameworks to make informed decisions. The role is not to tell platforms what to do, but to equip them with the tools to make their own judgments.

Evidence: Example of presenting philosophical theories about threats and legal philosophy concepts to Meta for their policy on violent threats.

Major discussion point: Role of academics in platform policy development

Importance of judgment by platforms themselves

Explanation: Platforms must ultimately make their own judgments on ethical questions. While academics can provide insights, the final decision and responsibility lie with the platform.

Evidence: Analogy to personal moral decisions, where relying solely on others’ opinions would be intellectually and morally unserious.

Major discussion point: Role of academics in platform policy development

Engaging with platforms is intellectually generative for academics

Explanation: Academics should engage with platforms as it can be intellectually stimulating and help identify new research topics. It also provides an opportunity to make a practical difference through academic work.

Major discussion point: Role of academics in platform policy development

Agreed with: Connor Sanchez, Ghandi Emilar (on ongoing and sustained engagement)

Connor Sanchez

– Speech speed: 130 words per minute

– Speech length: 2576 words

– Speech time: 1181 seconds

Consulting a wide range of experts on policy development

Explanation: Meta engages with a diverse group of experts when developing policies. This includes academics, NGOs, human rights experts, and individuals with lived experiences relevant to the policy area.

Evidence: Examples of consulting experts for the crisis policy protocol and the functional identification process.

Major discussion point: Meta’s stakeholder engagement process

Agreed with: Jeffrey Howard, Tomiwa Ilori (on the importance of diverse stakeholder engagement)

Inclusive, expertise-based, and transparent engagement principles

Explanation: Meta’s stakeholder engagement process is guided by principles of inclusivity, expertise, and transparency. They aim to include a wide range of perspectives and build trust with stakeholders.

Major discussion point: Meta’s stakeholder engagement process

Case studies of expert input impacting policies

Explanation: Meta provided examples of how expert consultations have directly influenced policy decisions. This demonstrates the practical impact of stakeholder engagement on platform governance.

Evidence: Examples of the crisis policy protocol, the functional identification process, and the human smuggling content policy.

Major discussion point: Meta’s stakeholder engagement process

Agreed with: Jeffrey Howard, Ghandi Emilar (on ongoing and sustained engagement)

Balancing different stakeholder perspectives in decision-making

Explanation: Meta considers various stakeholder perspectives when making policy decisions. They aim to balance different viewpoints and potential risks in their final policy choices.

Evidence: Example of the decision-making process for the policy on human smuggling content.

Major discussion point: Meta’s stakeholder engagement process

Language and cultural differences in moderation

Explanation: Content moderation faces challenges due to language and cultural differences. Platforms need to consider not just language translation but also cultural nuances and colloquial expressions.

Evidence: Meta’s content moderation in 70 languages, fact-checking in 60 languages, and addressing coordinated inauthentic behavior in 42 languages.

Major discussion point: Challenges in content moderation

Tomiwa Ilori

– Speech speed: 139 words per minute

– Speech length: 322 words

– Speech time: 138 seconds

Meaningful collaboration with credible human rights institutions

Explanation: Platforms should collaborate with established and credible human rights institutions to identify human rights issues. This involves working directly with organizations that have local expertise and presence.

Major discussion point: Human rights-centered approach to content moderation

Agreed with: Jeffrey Howard, Connor Sanchez (on the importance of diverse stakeholder engagement)

Centering victims and lived experiences

Explanation: Platforms should focus on including the voices of victims and those with lived experiences in their policy development process. This ensures that the impact on vulnerable persons is considered.

Major discussion point: Human rights-centered approach to content moderation

Differed with: Jeffrey Howard (on the role of academics in platform policy development)

Bottom-up content governance approach

Explanation: Platforms should adopt a bottom-up approach to content governance by working with key actors and experts in specific domestic contexts. This provides more contextual nuances and understanding of local issues.

Major discussion point: Human rights-centered approach to content moderation

Agreed with: Jeffrey Howard, Connor Sanchez (on the importance of diverse stakeholder engagement)

Increasing access to platform data for independent research

Explanation: Platforms should provide more access to their data for independent researchers, especially in underserved contexts. This allows for better understanding of how platforms shape critical aspects of human rights challenges.

Major discussion point: Human rights-centered approach to content moderation

Mike Walton

– Speech speed: 189 words per minute

– Speech length: 170 words

– Speech time: 53 seconds

Capacity and resource constraints

Explanation: There are concerns about the capacity of platforms to implement bottom-up approaches and support a wide range of languages in content moderation. This raises questions about resource allocation and infrastructure.

Major discussion point: Challenges in content moderation

Unknown speaker

Difficulty accessing platforms for local stakeholders

Explanation: Local stakeholders, such as researchers or NGOs, often face challenges in reaching out to platforms like Meta to engage, ask questions, or raise issues. This difficulty in access can hinder effective local engagement.

Evidence: Example from Iraq where it’s difficult for local stakeholders to reach Meta experts.

Major discussion point: Challenges in content moderation

Ghandi Emilar

– Speech speed: 154 words per minute

– Speech length: 1347 words

– Speech time: 521 seconds

Sustaining ongoing engagement with academics and experts

Explanation: There is a need to sustain continuous engagement with academics and experts, rather than relying on one-off interactions. This ongoing dialogue is crucial for meaningful policy development and improvement.

Major discussion point: Challenges in content moderation

Agreed with: Jeffrey Howard, Connor Sanchez (on ongoing and sustained engagement)

Agreements

Agreement Points

Importance of diverse stakeholder engagement (Jeffrey Howard, Connor Sanchez, Tomiwa Ilori; arguments: consulting a wide range of experts on policy development; meaningful collaboration with credible human rights institutions; bottom-up content governance approach). All speakers emphasized the importance of engaging with a diverse range of stakeholders, including academics, NGOs, human rights experts, and individuals with lived experiences, to inform platform policy development.

Ongoing and sustained engagement (Jeffrey Howard, Connor Sanchez, Ghandi Emilar; arguments: engaging with platforms is intellectually generative for academics; case studies of expert input impacting policies; sustaining ongoing engagement with academics and experts). Speakers agreed on the need for continuous, sustained engagement between platforms and external experts to ensure meaningful policy development and improvement.

Similar Viewpoints

Jeffrey Howard and Connor Sanchez (providing frameworks and insights rather than pushing preferred views; balancing different stakeholder perspectives in decision-making): both speakers emphasized the importance of considering multiple perspectives and frameworks in policy development, rather than pushing for a single preferred view.

Connor Sanchez and Tomiwa Ilori (inclusive, expertise-based, and transparent engagement principles; centering victims and lived experiences): both speakers highlighted the importance of inclusivity and considering the perspectives of those directly affected by platform policies.

Unexpected Consensus

Challenges in content moderation across languages and cultures (Connor Sanchez, Mike Walton, unknown speaker; arguments: language and cultural differences in moderation; capacity and resource constraints; difficulty accessing platforms for local stakeholders). There was an unexpected consensus on the significant challenges faced in content moderation across different languages and cultures, including resource constraints and difficulties in local engagement. This highlights a shared recognition of the complexity of global content moderation.

Overall Assessment

Summary: The main areas of agreement centered around the importance of diverse stakeholder engagement, the need for sustained and ongoing collaboration between platforms and experts, and the recognition of challenges in global content moderation.

Consensus level: There was a moderate to high level of consensus among the speakers on the fundamental principles of stakeholder engagement and policy development. This consensus suggests a shared understanding of the complexities involved in platform governance and the importance of collaborative approaches. However, the discussion also revealed ongoing challenges, particularly in implementing these principles across diverse global contexts, which may require further exploration and innovative solutions.

Differences

Different Viewpoints

Role of academics in platform policy development (Jeffrey Howard vs. Tomiwa Ilori; arguments: educative role rather than activist role; centering victims and lived experiences). Jeffrey Howard argues for an educative approach where academics provide frameworks and insights, while Tomiwa Ilori emphasizes the importance of including victims’ voices and lived experiences in policy development.

Unexpected Differences

Transparency in stakeholder engagement (Connor Sanchez vs. Tomiwa Ilori; arguments: inclusive, expertise-based, and transparent engagement principles; increasing access to platform data for independent research). While both speakers discuss transparency, there is an unexpected difference in their approach: Connor emphasizes Meta’s existing transparency in engagement, while Tomiwa calls for increased access to platform data for independent researchers, suggesting a gap in current transparency practices.

Overall Assessment

Summary: The main areas of disagreement revolve around the role of academics in policy development, the extent of stakeholder engagement, and the level of transparency in platform operations.

Difference level: The level of disagreement is moderate. While there are some fundamental differences in approach, particularly between academic and platform perspectives, there is also significant common ground in recognizing the importance of expert input and stakeholder engagement. These differences highlight the complexity of developing content moderation policies that balance various stakeholder interests and human rights principles.

Partial Agreements

Connor Sanchez and Tomiwa Ilori (consulting a wide range of experts on policy development; bottom-up content governance approach): both speakers agree on the importance of engaging diverse stakeholders, but Connor focuses on Meta’s existing process of consulting various experts, while Tomiwa advocates for a more bottom-up approach that emphasizes local context and actors.

Takeaways

Key Takeaways

– Academics should play an educative rather than activist role in platform policy development, providing frameworks and insights rather than pushing preferred views

– Meta engages a wide range of stakeholders and experts in its policy development process, aiming for inclusivity, expertise, and transparency

– A human rights-centered approach to content moderation should involve meaningful collaboration with credible institutions, centering victims’ experiences, and adopting bottom-up governance

– Content moderation faces significant challenges related to language, cultural differences, capacity constraints, and sustaining ongoing engagement with experts

Resolutions and Action Items

– Meta to continue engaging diverse stakeholders and experts in policy development

– Platforms to increase access to data for independent researchers, especially in underserved contexts

– Meta to expand language capabilities for content moderation and fact-checking

Unresolved Issues

– How to effectively scale bottom-up content governance approaches

– How to balance conflicting stakeholder perspectives in policy decisions

– How to improve accessibility for local stakeholders to engage with platforms

– How to sustain long-term engagement with academics and experts

Suggested Compromises

– Removing content soliciting human smuggling services while providing a safety page with immigration information

– Balancing removal of potentially harmful content with providing alternative resources or information

Thought Provoking Comments

“I think we should reject the activist conception of the role of academics and stakeholder engagement, and I think we should reject it because it diminishes the distinctive role that I think academics can play in this process because it eliminates the distinction between the role academics can play and the role that other stakeholders can play.”

Speaker: Jeffrey Howard

Reason: This comment challenges the common view of how academics should engage with platforms and proposes a different model focused on education rather than advocacy.

Impact: It shifted the discussion to focus on the unique role academics can play in providing frameworks and insights rather than just pushing for specific policies. This led to further exploration of how platforms can best utilize academic expertise.

“Our role isn’t just to tell platforms what we think the right answer is as we see it, as if platforms were counting votes among stakeholders. And by the way, even if platforms were counting votes among stakeholders, it’s not clear academics should get a vote since we’re not really stakeholders, we’re not particularly affected by policies in the way particular constituents are.”

Speaker: Jeffrey Howard

Reason: This insight reframes the role of academics from advocates to educators, highlighting that their value comes from knowledge rather than representing a constituency.

Impact: It prompted reflection on how platforms should weigh different types of input and expertise in their decision-making processes. It also set up the later discussion of how Meta actually incorporates academic and expert input.

“What I mean by that is working with key actors and experts in specific domestic contexts, such as national human rights institutions, civil society, and academics. This provides more contextual nuances and understanding of the issues on the ground, how these actors are currently thinking about them, and how exactly platforms can learn from their impact on the ground.”

Speaker: Tomiwa Ilori

Reason: This comment emphasizes the importance of local context and on-the-ground expertise, which adds nuance to the earlier discussion of academic input.

Impact: It broadened the conversation beyond just academic input to consider a wider range of stakeholders and expertise. This led to further discussion of how Meta engages with diverse stakeholders globally.

“Can you talk us through how you made the decision not to defer to those on the ground who were saying, leave this content up? What was that experience like? Because it does seem to me like the right judgment, but of course, it went against what some people thought you should do.”

Speaker: Jeffrey Howard

Reason: This question probes into the actual decision-making process at Meta, moving the discussion from theoretical to practical considerations.

Impact: It prompted a detailed explanation from Connor about how Meta balances different expert opinions and stakeholder input in making policy decisions. This provided concrete insight into Meta’s policy development process.

Overall Assessment

These key comments shaped the discussion by moving it from theoretical considerations of academic engagement to practical exploration of how platforms like Meta actually incorporate diverse expert and stakeholder input. The conversation evolved from defining ideal roles for academics to examining the complexities of balancing different perspectives and local contexts in global policy decisions. This progression provided a more nuanced and realistic picture of the challenges and processes involved in platform governance and content moderation policy development.

Follow-up Questions

How can platforms increase capacity to support content moderation in a wide breadth of languages?

Speaker: Mike Walton

Explanation: This is important to ensure effective content moderation across diverse linguistic contexts and to implement a bottom-up approach to policy development and enforcement.

How can AI be effectively used to help moderate content across languages?

Speaker: Mike Walton

Explanation: This explores potential technological solutions to the language capacity issue in content moderation, while acknowledging existing doubts about AI’s effectiveness.

How can Meta improve engagement with stakeholders at the local level, particularly in countries like Iraq?

Speaker: Adnan

Explanation: This addresses the difficulty some local researchers and NGOs face in reaching Meta experts to engage on issues or ask questions, which is crucial for effective local stakeholder engagement.

How can platforms sustain meaningful engagements with academics over time?

Speaker: Ghandi Emilar

Explanation: This is important to ensure that academic input into platform policies is continuous and not just a one-off engagement, leading to more robust and informed policy development.

How can platforms better identify and work directly with institutions that have ‘boots on the ground’ regarding content moderation in specific contexts?

Speaker: Tomiwa Ilori

Explanation: This is crucial for ensuring that content moderation policies are informed by local expertise and context-specific knowledge.

How can platforms increase access to platform data for independent research, especially in underserved contexts such as the majority world?

Speaker: Tomiwa Ilori

Explanation: This is important for enabling more comprehensive research on how platforms shape critical aspects of human rights challenges in diverse global contexts.

What should be the best standard practice regarding imported content moderation labels?

Speaker: Tomiwa Ilori

Explanation: This area requires further research to establish effective practices for content moderation across different cultural and linguistic contexts.

In what ways can platforms fund or support objective institutional research without impeding its independence or credibility?

Speaker: Tomiwa Ilori

Explanation: This is important for ensuring that research on platform policies and impacts remains independent and credible while still benefiting from platform support and data access.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #60 Safe Digital Space for Children

Open Forum #60 Safe Digital Space for Children

Session at a Glance

Summary

This panel discussion focused on protecting children in digital spaces, addressing the challenges and potential solutions for ensuring online safety. The conversation brought together experts from various sectors, including technology companies, international organizations, and cybersecurity firms.

The discussion highlighted that children are digital natives, often more adept at navigating online spaces than adults. Panelists emphasized the need for evolving protection measures that consider children’s changing needs as they grow and as digital experiences evolve. Key challenges identified included cyberbullying, inappropriate content, privacy concerns, and addiction to social media and games.

Experts stressed the importance of a multi-stakeholder approach involving parents, educators, governments, and technology companies. They discussed the role of AI and human moderation in content filtering, the implementation of parental controls, and the need for age-appropriate default settings on platforms. The importance of empowering children to self-govern their safety was also highlighted, with suggestions for creating digital advocates in schools.

The panel addressed the need for updated laws and policies to keep pace with rapidly evolving technologies. They also discussed the importance of equipping law enforcement and social services with the necessary resources and expertise to address online threats to children. The role of education in fostering digital literacy and challenging harmful social norms was emphasized.

Panelists agreed that while progress has been made, protecting children online remains an ongoing challenge that requires continuous adaptation and collaboration across sectors. The discussion concluded with a call for a holistic approach that balances children’s rights to privacy and protection while empowering them to navigate the digital world safely.

Keypoints

Major discussion points:

– The need for a holistic approach to protecting children online, involving governments, tech companies, parents, educators, and children themselves

– The importance of designing digital products and services with child safety in mind from the start

– The challenges of keeping laws and policies up to date with rapidly evolving technology

– The role of education in empowering children to navigate online spaces safely

– The balance between protecting children and respecting their privacy/autonomy online

Overall purpose:

The goal of this discussion was to explore various approaches and perspectives on protecting children’s safety and wellbeing in online environments, with input from policymakers, tech companies, international organizations, and youth.

Tone:

The tone was largely collaborative and solution-oriented, with panelists acknowledging the complexity of the issue and the need for multi-stakeholder cooperation. There was a sense of urgency but also optimism about finding ways to better protect children online. The tone became more interactive towards the end with audience questions, bringing in additional perspectives.

Speakers

– Ahmad Bhinder: Policy Innovation Director at the Digital Cooperation Organization

– Philippe Nahhas: Partner at a technology policy firm, Moderator

– Haitham Al Jowhari: Partner of Cybersecurity at PwC

– Mo Isap: Founder and CEO of iN4 Group

– Shivnath Thukral: Director and Head of Public Policy of Meta in India

– Afrooz Johnson: Global Lead for preventing and responding to online child abuse and exploitation at UNICEF

Additional speakers:

– Jutta Croll: German Digital Opportunities Foundation, Children’s rights advocate

– Unnamed high school student from Massachusetts

Full session report

Protecting Children in Digital Spaces: A Comprehensive Approach

This panel discussion brought together experts from various sectors to address the critical issue of protecting children in digital spaces. The conversation highlighted the complexities of ensuring online safety for young users and explored potential solutions involving multiple stakeholders.

Key Challenges and Context

Ahmad Bhinder, Policy Innovation Director at the Digital Cooperation Organization (DCO), an intergovernmental organization focused on digital economy advancement, emphasized that today’s youth are “digital natives” who face numerous risks in the digital world. These risks include cyberbullying, addiction, inappropriate content, and privacy concerns. Shivnath Thukral, Director and Head of Public Policy of Meta in India, noted that bad actors are present in both real and virtual worlds, underscoring the need for vigilance.

Afrooz Johnson, Global Lead for preventing and responding to online child abuse and exploitation at UNICEF, highlighted that laws have not kept pace with rapidly evolving digital technologies. This lag in legislation creates gaps in protection and enforcement. Johnson also pointed out that social services and law enforcement often lack the resources and expertise to address online challenges effectively.

Multi-stakeholder Approach

There was a strong consensus among the speakers on the need for a holistic, multi-stakeholder approach to protecting children online. This approach involves governments, tech companies, parents, educators, and children themselves.

Ahmad Bhinder advocated for the development of national children’s online protection strategies and mentioned the Digital Space Accelerators program as an initiative to support this goal. Haitham Al Jowhari, Partner of Cybersecurity at PwC, stressed that the private sector should invest in research and development, work closely with law enforcement and regulators, and collaborate with academia to develop innovative solutions.

Empowerment, Education, and Involvement

A recurring theme in the discussion was the importance of empowering and educating children about online safety, as well as involving them in the design of safety measures. Mo Isap, Founder and CEO of iN4 Group, suggested creating safe spaces for children to be empowered and experiment online. He also proposed identifying digital advocates among students to support their peers.

Ahmad Bhinder shared insights from his daughter’s perspective on online safety, highlighting the importance of considering children’s views when developing protective measures. The panel agreed that involving children in the design of safety features and policies is crucial for creating effective and relevant solutions.

Balancing Protection and Privacy

Shivnath Thukral highlighted Meta’s approach of implementing a framework of preventing, controlling, and responding to online threats. He emphasized the importance of implementing safety features by default on social media platforms while also respecting children’s privacy. Thukral also mentioned Meta’s collaboration with NCMEC (the National Center for Missing & Exploited Children) on sharing CSAM (child sexual abuse material) data to combat online exploitation.

An interesting point of consensus emerged between Thukral and an audience member regarding the need to balance child protection measures with respecting children’s privacy rights. This highlights the complexity of implementing safety measures without infringing on children’s rights and autonomy.

Evolving Education and Technology

An audience member, a high school student from Massachusetts, raised the important point that digital safety education needs to continuously evolve to keep pace with rapidly changing technologies. This observation underscores the ongoing challenge of ensuring that protective measures and educational approaches remain relevant and effective in a fast-changing digital landscape.

Socio-economic Considerations

Mo Isap emphasized the importance of considering socio-economic contexts when developing online safety policies. This nuanced approach recognizes that children from different backgrounds may face varying levels of vulnerability to online risks and may require tailored support and protection measures.

Unresolved Issues and Future Directions

Despite the productive discussion, several issues remained unresolved. These include finding effective ways to balance children’s right to protection with their right to privacy online, addressing online safety challenges for children from different socio-economic backgrounds, and combating the problem of children becoming online attackers themselves despite safety education.

The panel suggested several action items, including:

1. Implementing safety features by default in products used by children

2. Updating laws to adequately criminalize online violence against children

3. Requiring child rights impact assessments for tech companies

4. Developing national children’s online protection strategies

5. Strengthening social services to support at-risk children

Conclusion

The discussion highlighted that protecting children online remains an ongoing challenge that requires continuous adaptation and collaboration across sectors. While progress has been made, the rapidly evolving nature of digital technologies necessitates a flexible and proactive approach. By involving children in the design of safety measures, empowering them to navigate online spaces safely, and fostering cooperation between various stakeholders, we can work towards creating a safer digital environment for young users.

As Philippe Nahhas, the moderator, aptly summarized, “We overprotect children in the world, and we underprotect them in the virtual world.” This paradox serves as a call to action for all stakeholders to redouble their efforts in ensuring children’s safety in digital spaces while respecting their rights and autonomy.

Session Transcript

Ahmad Bhinder: system. Can you hear me through your headphones well? Perfect. Sorry, we’ll start in a minute. We’re just sorting out some technicalities. It’s channel 3. It’s channel 3 to hear the audio for the speakers. All right. So the tech is all sorted, hopefully. My name is Ahmed Binder. I am a policy innovation director at the Digital Cooperation Organization. And we are gathered here to discuss the very critical topic of protecting children online or in the digital spaces. I see some problems with people not being able to hear me. My audio is on channel 3. May I confirm if people can hear me? OK, excellent. So we are here to discuss a very, very critical topic that is very relevant to the digital economy and the evolving digital landscape. And for us, today we have a very senior, diverse, and expert panel of speakers. Unfortunately, our moderator is on his way. He’s stuck somewhere, but I will try to fill in for him. His name is Philip Nahas, and he’s a partner at a technology policy firm. So we have with us today Mr. Haitham Aljohri. He is a partner of cybersecurity at PwC. May I ask Mr. Aljohri to introduce himself, and then we move on with the other panelists? Good afternoon, ladies and gentlemen.

Haitham Aljabry: I’m Haitham Aljohri. I’m a partner in PwC Middle East, based out of Dubai. I do cybersecurity for a living. I’ve been with the firm for 20 years. I help government entities; my focus is on government entities and critical national infrastructure, working on their cyber agenda. And we’re glad to be here. Thank you.

Ahmad Bhinder: Thank you, Mr. Aljohri. Next we have with us Mr. Mohamed Isap, founder and CEO of iN4 Group, which is headquartered in MediaCity in Salford. So Mohamed, if you could please introduce yourself.

Mohamed Isap: Hi. Good afternoon, everyone. Can you hear me? Sorry. That’s better. Good afternoon. I’m Mohamed Isap. I’m the CEO of iN4 Group. We’re one of the leading advanced tech training providers in the UK, working extensively across data, cyber, cloud, and software. As part of our business group, we also deliver the government’s CyberFirst program, developed by GCHQ, into schools and colleges. And I’m also a founder of Star Academies,

where we have 35 schools and 25,000 young people in education across the United Kingdom as well.

Ahmad Bhinder: Thank you, Mohamed. Next we have with us Mr. Shivnath Thukral. And he’s director and head of public policy of Meta in India. Mr. Shivnath, could you please introduce yourself?

Shivnath Thukral: My name is Shivnath. I lead public policy in India. Can you hear me? Hi. Yeah, better. Yeah, my name is Shivnath. I head public policy in India. I’ve been in Meta for seven years, in India. And my responsibility is interaction with all government and regulatory agencies, making sure that Meta continues to take the right measures in a country like India, which tends to be one of the largest user bases for Meta across the world, across Facebook, Instagram, and WhatsApp.

Ahmad Bhinder: Thank you, Mr. Shivnath. And we have an online speaker with us all the way from the US. Her name is Afrooz Johnson, and she works at UNICEF. So Afrooz, can you hear us? And could you please introduce yourself?

Afrooz Johnson: Yes, I can hear you loud and clear. Good morning from a very cold New York City. I’m sorry I’m not there, but glad to be joining online. So I’m Afrooz Kavyani Johnson. I work at UNICEF, which I’m sure most of you are aware is the United Nations Specialized Agency for Children. I’m a global lead in our work to prevent and respond to online child abuse and exploitation. UNICEF works in over 190 countries and territories. And I support our teams around the world on tackling this issue. So this includes research, work on legislative and policy reform, training for frontline law enforcement, and social services, educative efforts with children and their families, and various collaborations and engagement with industry. Thank you.

Ahmad Bhinder: Thank you, Afrooz, so much. And OK, so let me start by saying a few words. I will not take much of your time. And then we would really go ahead and listen from all the experts panelists from here. So as I said, my name is Ahmed Binder. I am a policy innovation director at the Digital Cooperation Organization. You can see on your screens, we are an intergovernmental organization that covers the digital economy. And we instill cooperation amongst our 16 member states that span from the Middle East, Africa, subcontinent, to Europe. And we do a lot of initiatives around, for example, around the sustainable growth of digital economy, governance in the digital space, AI, digital rights, et cetera. We have a growing list of observers. And the observers are from the academia, from civil society, from private companies as well. So as you could see on the screen, we are 40 plus, and we are rapidly growing. We play four roles for, as I said, we are an intergovernmental organization. So we are represented by the ministers of digital economy and ICT of our member states. And we play four roles for our member states. We advise them on the best practice policies. We facilitate the cooperation on digital economy. We advocate for the best practice policies, and we provide information. So we have published a few indices to measure the digital economy. But the details could be found on the website. So today, I am here to talk about the safe space for children. Children are digital natives. And actually, I was having a chat with my kids, who are 13 and 10, last night. And I was talking to them. I was telling them that I’m going at a session for online safety for children. So could you tell me what are the issues that you face online? And they said, what do you mean? What do you mean online? Or what do you mean issues that we face? And it took me a while to bring it to them or to discuss with them that there is a world around the digital world for them. So for them, it’s all about digital. And that is why, when we think about them being digital natives, they live in the digital world. So the first question, when we are addressing the issues around the children’s online protection, are then is, how do we define a child? Who qualifies to be a child? Generally, a child becomes an adult when the child turns 18. But there’s no magic switch that flips, and you qualify from being a child to an adult. So when we are thinking of online safety, and we are thinking of creating the safe digital experiences, we need to remember that the child needs the change as they grow. And as they grow does not really only include as they grow from, for example, from toddler to a preteen, et cetera. But as the digital experiences evolve as well. And therefore, the discussion is ongoing, and the discussion has to evolve. And the measures to protect child online safety, they have to evolve keeping this into consideration. The second thing is then, how do we define a safe digital space? So the term refers to an online environment where individuals, especially children and vulnerable groups, can interact, communicate, and engage in various digital activities without having the risk, without experience the risk of harm, exploitation, or abuse. Five years ago, approximately, there there was a study that I was just looking up for, and almost a third of digital interactions or a third of internet users were children, and this was five years ago. 
I’m not sure, I did not come across, maybe the panelists have a latest number, but I’m sure it’s half of the population or half of the internet users are. So keeping this into consideration, last year we did some work on digital rights, and one of the streams of our work focused on online safe space, especially for children. We developed through consultation, and I will come back to it, how did we do it, but we developed a paper that is available on the DCO website. I welcome you all to have a look at that paper, and in that paper, we explored different dimensions of what are the threats and challenges, as you could see on your screen, that the children are subjected to, and what are the different categories, who are the stakeholders who can play a part in protecting the safe space for children, and then we furnished some policy recommendations based on that. So OECD categorizes the risks into three categories, the risks that are associated with technology, the ones that are associated related to the customer experiences, and the risks related to privacy and security. We all know, and I think we will get, so we have social media representation here, so we’ll explore what is being done to counter the effects of addictive behaviors of social media, especially with the endless scrolling feature, then we have the immersive technologies, for example, metaverse, and a lot of virtual technologies that are coming, so what could be the impact, for example, if the amount of data that is being collected, or that could be collected on a child, or anybody who is using those technologies, and how it could be used, and how it could be protected, we will explore how that risk can be effectively addressed, and then we, cyber bullying and social media, et cetera, bullying, et cetera, they’re all the risks that the children face in today’s world, and hopefully with our discussion, we will address some of those, and we’ll see what is being done to address them. So then, normally, and you could see on this slide, this is, it could be educational institutions, or it could be any organizations that are dealing with the children’s data, or who are dealing with the children, so the data leaks, for example, from the education system is a growing concern, which can then, if it lands in the wrong hands, could be used for harmful manipulation of kids, for example, or the ineffective crisis management by the institutions, or by the organization, can lead to reputational damages, lack of efficient incident management could cause disruption from the education system for the kids that is highly dependent on the online interfaces now. So the paper then comes to present some policy recommendations for four kind of stakeholders. So, for example, schools and educators should support the programs and helplines for children to create awareness campaigns, on cyberbullying especially. The role of parents is moving beyond just implementing the parental control, and it’s about parents’ involvement, and I’ll give you a small example towards the end of my conversation here on that, but it’s not just that you yell at them and say, your screen time is over, and then disconnect yourself, that’s not working anymore. Government, for example, so there is a concern about the targeted ads, about the endless scrolling, about the transparency of what the kids say, so governments have a lot to do there. 
And private sector, the most important thing is, and this is for all, is to involve children, or involve the young people into whatever is being designed at them. Okay, now the boring stuff aside, so last night I went home, as I said, that I just got my 10 years old, and I said, look, I’m preparing something for tomorrow, so could you give me a few points that I can bring up? So she went back, and she wrote this piece of paper for me, and I think this is the, and it was not meant to be shown, she just wrote this piece of paper for me, and this was for me to copy the points here, and I just looked at it, I just pulled it out from my notebook, and I think to conclude my intervention here, I would just read what she said. Just read in whatever grammatic mistakes or whatever you find it. So, number one, she says games. So she categorized her online interaction after we had this conversation that, you know, how do you look at the online space for games? She says problem is cyberbullying. Now she, a 10 years old, is aware of cyberbullying before looking it up, and I had no idea about it. And she says the solution is to report the person, then talk to an adult, and who you feel comfortable with. So this is her recommendation, right? Then she writes apps. Calling apps is a subcategory that she produced, and the problem, she says, is random people calling or texting you. She does not have a cell phone. So I was quite surprised to see this, that the ways of interaction that is subject to, of course she has a tab, she interacts with people, so this is a problem that she identified, and the solution is ignore and block. So we need the kids, they are smart, but we really need to reinforce this. Then she says watching apps. Of course, this is the content that is available online, and the problem is inappropriate content. So we try to protect them from any inappropriate content, but they are very well aware. And the solution, she says, is report the, okay, report the channel, and dislike the video. And then she says, if you are a minor, turn kids mode on. By the way, I’m, okay. So then comes the final thing is social media, and the problem, she said, is addiction. And the solution is put a screen time limit, which I normally do, and go outside regularly. So the importance of the physical activities is really important. The final one is, again, on the social media, she says hacking. Make the system stronger so the hackers cannot break into people’s accounts. So I would really finish my speech with this, because last night at around 12.30, when she gave me this paper, she has school holidays, so she’s allowed to keep awake until late, I said, okay, you know, I’m done. I just put this paper, I’ll just read it out in front of the panel today. So with now, with this, thank you very much. I pass it on to our able moderator, Mr. Philip Nahas now. We went through the moderation sessions, and the floor is all yours now. Thank you. Thank you very much, Ahmed, and.

Philippe Nahas: Thank you. Testing, testing. Oh, hi now. Thank you, Ahmed, and special thanks to little Miss Binder for her insights. Good afternoon again, everyone; my apologies for joining this session late today. And I’ll start with something I got from my child’s nursery, which is that we overprotect children in the world, and we underprotect them in the virtual world. And today, we’re here to discuss that, to understand how we can provide the right protection for children across the virtual world. And for that, I have a very distinguished panel joining me today. I’m gonna take back my seat because I feel I’m disconnecting here. Excuse me while I do this sitting down. And again, let’s see. Hello, testing. Very well. Okay. Opening my laptop and my notes for today. Moments to introduce everyone. So maybe it’s starting on, excellent, excellent. So maybe we can take some time to go over a few questions. The last point Miss Binder gave was: secure your system. And to that, I mean, Haitham, I think this is your area of predilection. So my question to you is, if I may put it in my own words: what are the do’s and don’ts for institutions when it comes to cybersecurity and environments? Thank you.

Haitham Aljabry: Let me start by saying, yeah. Mine is not working? Okay. So let me start by saying, it’s very personal to everyone who has children, because it impacts every society on the planet, and when it becomes personal we have to put the right measures in place. It becomes even more an obligation for all of us to stand up and protect the most vulnerable part of our society, which is children. Now, to do that, there is an obligation on multiple stakeholders, the way we look at this in PwC. So there’s a role for international communities, and we have seen, for example, Saudi taking a big step last year in arranging what we call a child protection conference, or summit, which took place in Riyadh, where they looked into international collaboration to address this international challenge. Now, if you go down, there is the role of government, which I’m not going to talk about, and then also schools, parents, et cetera. Now, if we zoom into the private sector, where I belong: we categorize the private sector into three categories in the way we look into that framework. The first category is social media, gaming platforms, and those platforms, like mobile operators or technology manufacturers, who interact directly with children. This is the first, let’s say, layer of technology or private sector that interacts with children, and there is a big role that these players should play in terms of many factors, let’s say different risks. Cyberbullying: how do you minimize that by having the right functions on social media platforms? We’ll hear about that from Meta. How do you control comments? How do you control the ability to block a certain user? Not all platforms have this, so this is a journey, and all platforms should adopt that kind of behavior. We take it as given from leading players like Meta or Instagram and stuff like that, but others should also follow. There’s inappropriate content, for example, and here comes the role of AI, where not just AI but even human moderation is needed to control that and to filter content for children. Similarly, when you talk about privacy: privacy is very important, and these platforms should adopt the right privacy configuration to make sure they have this by default. It’s no longer like a feature; it should come by default in these platforms. Two-factor authentication, for users, for children even, should become easier and easier; it should become, again, a default setup. That kind of obligation on those players is very important: activating features like parental control and screen time, like what your daughter suggested. And we also recommend that parents and children use the same platforms, because that makes it easier for parents to control what their children watch, or limit screen time, or the ability to download apps and stuff like that. Then the second category of private sector would be companies, service providers, technology vendors. These guys work in the back, let’s say, to serve the entities that deal with children directly. These companies need to invest more in R&D; they need to work with law enforcement; they need to work with regulators to help them draft these kinds of policies. This is the kind of consulting work that companies like PwC do. Vendors have a big obligation to invest more in ethical AI and stuff

like that. And finally, we look into corporates, or organizations in the private sector. They also have a role to play: adopting compliance and privacy laws, protecting young generations. And let’s not forget one of the challenges we talk about, it’s an interesting one: the new generation, let’s say, has this mentality of hacking and not adhering to corporate kinds of policies. That’s what new generations are like. How do you deal with that kind of behavior? How do you make sure you give them the best experience, but at the same time it’s safe and does not compromise the corporate policies as well?

Philippe Nahas: Thank you very much, thank you for that. You spoke about corporate policies, and I think that’s a very important point to touch on. Maybe I’ll turn it to Mo to give us his view about whether these corporate policies should be more about adults setting the path to safety for children, or are these also about empowering children themselves to self-govern their safety. What’s your

Mohamed Isap: view on that Mo? Can you hear me okay? I think the technology is struggling at the moment. I can give you a perspective just from my let’s swap. So I’ll give you a perspective from what we do in in the UK so we we have a cyber eSports Academy seven and a half thousand young people transition through those academies across the country and one of the things I will say starting point is that we cannot just use a single definition of children and think that that is be all and end all in terms of policy thinking you’ve got a spectrum of children those from disadvantaged backgrounds to those from more affluent backgrounds there is a very different exposure to risk in those socio-economic contexts there’s also the way young people are from neurodiversity to learning and learning styles and how they interact into that world and and unless we understand the nuance no policy can ever work and it’s got to be at that sort of level of hyper local to hyper child in the context of how that child is engaging with the wider world and the digital world is just part of the oxygen that they breathe so what we found in our cyber eSports academies is that you’ve got young people who are feeling empowered and belonged and and I think that one thing that heartens me that all the surveys that I’ve seen and and I’ve experienced from those young people that go through my academies is that they do know right from wrong in fact to be honest with you I think they’re probably better than our generation was in terms of understanding right from wrong it’s just the world is more gray than it was black and white in our day and and let’s be honest in in our ability to try you know and you’ve heard already that we all have limitations of our knowledge about this world and you know any one of us of a certain generation that had sex education at school I don’t think anybody really learned about sex at school they just experienced it themselves and learn for themselves I think this is the same issue young people are not listening in the classroom they’re just experiencing it for themselves the difference is if you give them a place where they can be empowered in a safe place and what we do with our cyber eSports academies is that we give them the belonging of the positive elements of this world and how they can be empowered both in terms of understanding their their futures but also understanding that there’s a physicality with that digital world and making friends and connecting with peers kind of you know is augmented with the digital world not just exclusively in the digital world and those two things are are critically important so I think you know from a from an understanding of young people point of view and and and what we do to empower them to to you know we have cyber teams in each school and and and I believe that you know we need to get to a point where young people will govern this space themselves if you give them the tools and the ability to do so and and like we have you know in every school in every country young people will be selected as head boy and head girl and prefects and monitors I think there’s a space now for schools to really identify digital advocates to be able to govern across their peer group and and give support to the educators because the educators are really struggling to keep up with the nuance and the complexities even primaries I mean I’ve got you know 16 primary schools in my trust and if I speak to the educators in the primary schools they’re at a loss because the 
sophistication of a year five or year six child is way beyond the capacity of a key stage to teacher who really does not understand truly the complexity so they might deliver a curriculum but that is nowhere near where it needs to be and that confidence needs to emanate from both educators and empowerment for young people so from my point of view what we found successful across our cyber and esports academies we deliver cyber first for GCHQ is when we create a safe space for them to be empowered I think you get some really brilliant outcomes but I just say that the more deprived the more disadvantaged the young person the more vulnerable they are so that’s the point at which we have to do more because a parent can have all of the sort of tools if they’re educated if they’re from a decent socio-economic background but I think one thing I would say as a parent and my children have grown up now but I would just say that parents need to empathize more I don’t think that there is this preaching mentality that’s going to ever work I think there’s a confidence and a openness to understanding and also for parents to want to learn rather than try and educate all the time as well thank you very much Mo and I think you’ve you’ve touched on a good point it’s how do we provide a safe space for children to experiment and how do we give them the tools to do so across the board I feel that the first online space that children tend to experiment with is social media today and maybe that’s the perfect segue for question to Shivnath here and it’s you know in terms In terms of that space, in terms of the environment, what’s META’s approach to ensure the safety and the well-being of children online, especially with respect to the products that you provide? I’m going to share my mic with you.

Shivnath Thukral: Thanks so much. And firstly, I want to thank all my panelists and you for framing this issue in a very forward-looking manner. I can assure this room that usually when I’m in panels, it starts with a very attack mode. What I’m glad to hear is that at least you all are thinking of solutions and how to make it better rather than looking at blaming anyone. And everybody talked about their parental experience. I have a 10-year-old son and a 7-year-old daughter. And trust me, I think despite me working for META, they know much more than I do when it comes to the world of technology. Sometimes I have to look at them to get some tricks caught. And yet, what they don’t realize is there is something called the cloud. So when they’re sitting back in India and working on their iPad, I can see them clicking pictures on the iPad on the photo booth and uploading, thinking that I’ll not get to see. It’s just that the iCloud account is linked to my phone. And I see these photo booth pictures appearing on my phone. Having said that, that brings me to the moot point. What as companies are we doing, or what is our approach? I can confidently state that as a company, which is one of the leading companies in the world of social communication, the approach we have taken is very pragmatic, forward-looking, and builds solutions, some of which we have heard already and some of which are already existing through the product. You cannot bring in a sheet of paper and tell people to say, these are the features that can happen. These features are already there, whether it’s about nonstop addiction, nudging on alternate content, sleep time, parental controls, which brings me to a more fundamental issue. You can do parental controls. The question we have to ask is, do our children want us to be allowed to implement those parental controls? Will they like it if you tell them, give me the phone, let me put it on, and now I have some control? They don’t even want to share their account. That’s the reality. You can have parental control, but kids don’t like to share their account details, right? So then what do you do? So I think the more fundamental approach is think it through the design level. So someone rightly said, we’ve launched something called teen accounts, where the default feature of a teenager will be on a parental control site. It is not like they have to give it. So the default feature, if you are below 16, will be on a parental control site. I think as a tech company, if I don’t use the tech to install these solutions, we have a problem, and that’s why I think we have taken the right measures. But I want to share one thing with this room. This will never be a 100% job done. It cannot be. Reason, just like in the real world, bad actors are everywhere. In the virtual world, bad actors are everywhere. But we need to think through what you read from your piece of paper, unwanted contact. On Instagram, for example, we do not allow anybody who you don’t know to contact you, to message you or whatever, and yet, we see instances of cyber bullying or unwanted contact happening through different forums, because the bad actors are on the prowl. So the question is, as parents, how vigilant are we? Like Mo said, the tools or the thinking of yesterday cannot confront the issues of today. So are we upping our game as parents? I find it a challenge, despite being in a tech company. Sometimes we are not able to gauge what the bad actors are going to do. 
Hacking, impersonation: it is not just a child or a youth issue. How many people in this room have not been hacked? We are all adults. Please tell me the truth that you’ve never been hacked. Do you have your two-factor verification on on your Instagram or WhatsApp? I’m sure many of you don’t. Why? This is despite us being adults, we don’t try to go and do it. So you imagine your child should be doing that. So the awareness and the knowledge level, how we are doing. One fundamental approach from the safety of the youth, which we feel we very strongly believe in, is the framework of preventing, controlling, and responding. We take several measures, including deploying a lot of AI tools, looking at keywords to make sure anything bad, before it happens, we can prevent it. Once a bad incident has already happened, it’s too late. We have lost that game then. So prevention is the most critical thing of taking down accounts, which are usually on the prowl, et cetera. Then is giving user control, which is related to all the features, et cetera, that we have. The third one is as critical, which is responding. How fast are we able to give you ability to report to us? And how fast are we responding to it? Are we working with enough civil society organizations? Like we have a program ongoing with NCMEC where we share CSAM data to make sure that law enforcement agencies across the world are able to work with each other. So I think the prevent, control, and respond mechanism is super critical. And the last piece I will say is, on more egregious issues like CSAM, et cetera, we take it very seriously. We are invested heavily on that. And I think we work across multiple agencies across the world inside our company. We have many former law enforcement officials who work with us, officers who work with us, and we deploy a range of technology to make sure that we are able to prevent bad actors from being on the platform. It is not in our interest. I mean, I cannot ask you to be a user of my platform if I can’t keep our children safe. I’m a parent. As a parent, it is my responsibility. And in India, trust me, I show up in front of every regulator to be asked very, very tough questions. But I can also say very proudly that we play on the front foot by stating what we do, running public affairs campaigns for awareness, et cetera. But at the end of the day, my product has to do the talking, and that’s what we are focused on.

Philippe Nahas: Thank you very much. Thank you. I think you’ve made a very good point on the engagement with the regulators, and you’re constantly on that. Would you care to elaborate a bit more about… We have Afrooz online from UNICEF. We do, I suppose. Yes, we do. So should we invite her? Yeah, I think she would be absolutely next. At this point, I think if we’re talking about policy stakeholders, it’s a good idea to talk to Afrooz. And I think to hear from you as well, Ahmed, from the NGOs’ perspective, from the policymakers’ perspective. And to begin with, Afrooz is with us, and she can hear us, right?

Ahmad Bhinder: Okay, very good.

Philippe Nahas: So maybe, Afrooz, the question to you here is: what are the most critical interventions that governments in various countries can make to protect children online? And this is from your experience at UNICEF. What can you tell us?

Afrooz Johnson: Yeah, thank you so much. And sorry I didn’t meet you at the outset, Philippe, but it’s great to be part of this conversation. So I think I would just highlight four key challenges, and then the responses that we’re advocating for and supporting governments with around the world. A lot of them have already been touched on, so forgive me for repetition. But the first, I think, when we look at this issue, is the design and the operation of digital services and platforms. We’ve heard how there are bad actors on these platforms, but also there are design features that create more risks for children, and not just children, as we’ve heard. So we see that user engagement is prioritized over child safeguarding, for example. The ways in which platforms are designed facilitate this rapid and wide-ranging spread of hateful and abusive content, as another example. So the ask there for government and for regulators is requiring the tech sector to undertake assessments, and I think this came out in the DCO report as well. And what UNICEF advocates for is child rights due diligence, and particularly child rights impact assessments, so that companies, rather than being reactive and trying to retrofit after the fact, can be more proactive and embed this concept of child rights by design, which is inclusive of safety by design and privacy by design, so that all of these are prioritized in the development of digital products and services. So this is the first ask for governments and policymakers: really prioritizing children’s rights in the design and governance of technology products, and this could be through regulation or other means. The second main challenge is, of course, that we know that laws have not kept pace with the rapid development of digital technologies. And then when we’re talking about criminal activity, there are also challenges in investigating and prosecuting cross-border cases. So the ask then, again, for government, and what we’ve done around the world, is really supporting governments to update laws and policies so that online violence is adequately criminalized, and so that they’re also future-proofed against rapidly evolving technologies. The third challenge that we see around the world is that social services and law enforcement often lack the resources and the expertise to address these new challenges through digital technologies. So this makes it hard to support at-risk children, and it also makes it hard to identify perpetrators. And I really appreciated the intervention earlier talking about particular groups of children that are at risk; the solutions for supporting them need to happen kind of in real life, if I can put it like that. So we need strong social services in order to identify and support children. In many countries mental health services are often insufficient, so this can leave children without support, which again makes them vulnerable, but it also means that they don’t get adequate support after, if and when, something happens online. So I think the ask there for government is really equipping law enforcement, educators, social services and others to really identify, respond to, and prevent forms of online harm.
And then finally I would say there are challenges with respect to some harmful kind of social norms and limited public discussion that we have in many communities around the world you know there are taboos around talking about certain topics like sexual abuse and these things can make it difficult for victims you know to speak out and we know that you know there are forms of sexual abuse that are facilitated by technologies and when we have these you know limited public discussion and we have these harmful norms it can really constrain kind of the efforts to prevent and respond. So as well as you know the efforts that we’ve spoken about for governments supporting children’s digital literacy and online safety we also need those broader educative initiatives that are designed to foster healthy relationships in early adolescence those that are designed to challenge kind of harmful gender norms to motivate help-seeking and really support children you know if if their peers disclose to them how can they react to that. And of course all these other educational initiatives with parents and caregivers and educators. So I’ll stop there for now. Thank you.

Philippe Nahas: Thank you very much, Afrooz. I think it’s very interesting that you spoke about your collaboration with the various governments, and maybe we can hear it from another perspective. The DCO is all about collaboration between various governments, specifically the digital arms of these governments. Maybe you can tell us, in your experience, Ahmad, how does that collaboration contribute to the safety of children online? Thank you

Ahmad Bhinder: Philip. I think I will start with this. So there’s one of one topic across the policy domains we are public policy practitioners where there is a consensus that things need to be done. So children need to be protected. So we have we have come to a level of consensus where you know a lot of other policy debates are towards you know taking an approach or another. There is a global consensus that there needs to be something needs to be done to protect the children and especially while they are vulnerable online. Now then the question is how do you how do we activate the different stakeholders? So the role of the intergovernmental organization like UNICEF is broadly policy guidance or proposing initiatives or proposing different measures. The role of the technology companies is to use technology to to enhance or advance that agenda. So the companies the organizations are actually involved in these discussions and this dialogue. So I’ll give you an example last year and this is where this this session and this policy paper came from. We have a program called digital space accelerators program where we where we pick up the pressing issues in the digital economy and then we have global roundtables and discussions across all the regions and we bring in experts to to really talk about and discuss the issues and how to collaboratively solve those issues. So so so in so this is one role of the intergovernmental organizations but I think there’s a there’s a need for a more concerted effort on the national levels as well. So right now of course the policy landscape and the legal landscape and the regulatory landscape are different across different across different countries and the level of maturity is quite different across different countries. So one of the recommendations that we or one of the things that we picked up from last year’s collaborative discussion was that on a national level considering the or identifying different stakeholders that could include educators of course there are there common stakeholders like government policymakers and like technology companies that are beyond borders but to get them together come up with with children online protection strategy. So that would include what are the rules that need to be made, what are the rules that need to be tweaked, what are the initiatives. So it has to be a concerted effort on national levels while learning from the best practices of course we have a whole bunch of them and then having a national championship in across those nation nations and then those that that could then be expanded on a regional or international level. Thank you very very much. I think we’re all out of questions for today. I’ll open it up to the audience if there are a few questions that you have for any of our panelists today. Please raise your hand and we’ll give you the microphone.

Philippe Nahas: We’ve got time literally for a couple of questions, so I think I’ll start with the lady and then pass it on to you.

Audience: Yes, thank you so much. My name is Jutta Kroll, from the German Digital Opportunities Foundation. I’m a children’s rights advocate, and first let me congratulate you that you have today a panel with only male speakers, because several years ago we would have been talking about safety for children only among women. So it’s kind of an achievement that no woman is on your stage. I would like to refer you to the UN Convention on the Rights of the Child, because you have somehow picked upon that issue. I saw Mo speaking about the eSports, and that is related to the children’s right to leisure time and to play in Article 31. But I also thought that some of the measures you’ve been talking about would touch upon children’s privacy, and that is Article 16 of the UN Convention. So when we put up parental controls, when you have a look at the photos your children upload to the cloud without them knowing, that would touch upon their privacy. So we need to take care that we balance privacy of children as well as their right to be protected. Thank you for listening.

Philippe Nahas: Thank you. Thank you very much for sharing.

Audience: Thank you so much for your presentation. I’m actually currently a high school student in Massachusetts, so, as a literal child perspective, I want to elaborate a little bit on what you said about how children indeed know the difference between right and wrong. I completely agree with that, after being exposed to all kinds of education about digital safety and stuff. But one thing I observed in my school is that, despite the fact that we all have been educated about digital safety, we still become vulnerable to it, and some people even go on to become the attacker. So I think the reason for that is that children are evolving and technologies are evolving, but the education on digital safety has to keep pace with those evolutions. So I think it’s also time for us to revolutionize our education for our children. Thank you.

Philippe Nahas: Thank you very much for sharing. Maybe I’ll conclude in a minute, just to say that we’ve seen, in short, that the digital world is actually an image of the real world, whereby, as we determined, there are bad actors in both and we need protection in both. Certainly the policies out there were not up to speed; we have been playing catch-up for the past few years, and we are still catching up with a very fast-paced evolution of the digital realm. One thing that I think becomes apparent from our panelists here is the importance of looking at it holistically, from international organizations to governments to parents to law enforcement as well as policymakers, all the way to empowering children themselves. And I think, as you well said, the difference between right and wrong is something that should be infused at a very early age, and it should be holistically part of what we do as we conceive new policies going ahead. Thank you very much, everyone, for listening. I appreciate your time and hope to see you again soon.

A

Ahmad Bhinder

Speech speed

140 words per minute

Speech length

3213 words

Speech time

1369 seconds

Children are digital natives and live in a digital world

Explanation

Ahmad Bhinder emphasizes that children are born into and immersed in the digital world. For them, the digital space is not separate from their everyday reality.

Evidence

Bhinder shares an anecdote about his conversation with his children, who struggled to understand the concept of ‘online’ as separate from their normal life.

Major Discussion Point

Challenges in protecting children online

Risks include cyberbullying, addiction, inappropriate content, and privacy concerns

Explanation

Bhinder outlines various risks children face in the digital world. These include cyberbullying, addiction to digital devices or platforms, exposure to inappropriate content, and concerns about privacy.

Evidence

He references a paper developed by DCO that explores different dimensions of threats and challenges children are subjected to online.

Major Discussion Point

Challenges in protecting children online

Agreed with

Shivnath Thukral

Afrooz Johnson

Agreed on

Children face multiple risks online

Involve children in designing solutions for their online safety

Explanation

Bhinder suggests that children should be involved in the process of designing solutions for their online safety. This approach recognizes children’s understanding of the digital world and their unique perspectives on the challenges they face.

Evidence

He shares insights from his 10-year-old daughter’s handwritten notes on online safety, demonstrating children’s awareness of issues like cyberbullying and inappropriate content.

Major Discussion Point

Approaches to ensuring children’s online safety

Develop national children’s online protection strategies

Explanation

Bhinder advocates for the development of comprehensive national strategies to protect children online. These strategies should involve various stakeholders and address the specific needs and contexts of each country.

Evidence

He mentions the need for a concerted effort at the national level, involving educators, policymakers, and technology companies to create a children’s online protection strategy.

Major Discussion Point

Approaches to ensuring children’s online safety

Agreed with

Haitham Al Jowhari

Afrooz Johnson

Agreed on

Need for multi-stakeholder approach

H

Haitham Aljabry

Speech speed

148 words per minute

Speech length

53 words

Speech time

21 seconds

Private sector should invest in R&D and work with law enforcement and regulators

Explanation

Aljabry emphasizes the role of the private sector in protecting children online. He suggests that companies should invest in research and development, and collaborate with law enforcement agencies and regulators to develop effective policies and solutions.

Evidence

He mentions that companies like PwC work with law enforcement and regulators to help draft policies related to online child protection.

Major Discussion Point

Role of different stakeholders in protecting children online

Agreed with

Ahmad Bhinder

Afrooz Johnson

Agreed on

Need for multi-stakeholder approach

M

Mohamed Isap

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Create safe spaces for children to be empowered and experiment online

Explanation

Isap advocates for creating safe online environments where children can feel empowered and experiment safely. He believes this approach allows children to learn and develop digital skills in a protected setting.

Evidence

He references the cyber eSports academies his organization runs, where children can engage with technology in a safe and empowering environment.

Major Discussion Point

Approaches to ensuring children’s online safety

Schools should identify digital advocates among students to support peers

Explanation

Isap suggests that schools should identify and empower digital advocates among students. These advocates can help govern the digital space and provide support to their peers.

Evidence

He draws a parallel with existing school roles like head boy/girl and prefects, proposing a similar system for digital advocacy.

Major Discussion Point

Role of different stakeholders in protecting children online

Empower children to self-govern their online safety

Explanation

Isap argues for empowering children to take an active role in governing their online safety. He believes that given the right tools and support, children can effectively manage their digital experiences.

Evidence

He mentions that surveys and his experience with young people in cyber academies show that they have a good understanding of right and wrong in the digital world.

Major Discussion Point

Balancing protection and empowerment of children online

Differed with

Shivnath Thukral

Differed on

Approach to parental controls

Consider socio-economic contexts when developing online safety policies

Explanation

Isap emphasizes the importance of considering socio-economic factors when developing online safety policies. He points out that children from different backgrounds may face different levels of risk and require tailored approaches.

Evidence

He mentions the spectrum of children from disadvantaged to affluent backgrounds and how their exposure to risk varies in different socio-economic contexts.

Major Discussion Point

Approaches to ensuring children’s online safety

S

Shivnath Thukral

Speech speed

175 words per minute

Speech length

1185 words

Speech time

405 seconds

Bad actors are present in both the real and virtual worlds

Explanation

Thukral points out that malicious individuals exist in both physical and digital spaces. This reality makes it challenging to create a completely safe online environment for children.

Evidence

He mentions instances of cyber bullying and unwanted contact happening through different forums despite safety measures.

Major Discussion Point

Challenges in protecting children online

Agreed with

Ahmad Bhinder

Afrooz Johnson

Agreed on

Children face multiple risks online

Implement a framework of preventing, controlling, and responding to online threats

Explanation

Thukral outlines Meta’s approach to child safety, which involves preventing threats, giving users control, and responding quickly to issues. This comprehensive framework aims to address online risks at various stages.

Evidence

He mentions the use of AI tools to prevent harmful content, user controls for safety features, and rapid response mechanisms for reporting issues.

Major Discussion Point

Approaches to ensuring children’s online safety

Social media platforms should implement safety features by default

Explanation

Thukral advocates for social media platforms to have safety features enabled by default, especially for younger users. This approach ensures a baseline level of protection without relying on user action.

Evidence

He mentions Meta’s introduction of ‘teen accounts’ where parental control features are enabled by default for users under 16.

Major Discussion Point

Role of different stakeholders in protecting children online

Parents need to be more vigilant and empathetic towards children’s online experiences

Explanation

Thukral emphasizes the need for parents to be more engaged and understanding of their children’s digital lives. He suggests that traditional parenting approaches may not be effective in the digital age.

Evidence

He shares personal experiences as a parent, noting that despite working for Meta, his children often know more about technology than he does.

Major Discussion Point

Role of different stakeholders in protecting children online

Implement parental controls while respecting children’s privacy

Explanation

Thukral discusses the challenge of implementing parental controls while respecting children’s privacy. He points out that children often resist sharing their account details with parents, making it difficult to apply traditional parental control measures.

Evidence

He mentions the reluctance of children to share their account details with parents, highlighting the need for alternative approaches to online safety.

Major Discussion Point

Balancing protection and empowerment of children online

Differed with

Mo Isap

Differed on

Approach to parental controls

A

Afrooz Johnson

Speech speed

143 words per minute

Speech length

910 words

Speech time

379 seconds

Laws have not kept pace with rapidly evolving digital technologies

Explanation

Johnson points out that legal frameworks have not evolved as quickly as digital technologies. This lag creates challenges in effectively addressing online threats to children.

Evidence

She mentions difficulties in investigating and prosecuting cross-border cases of online crimes against children.

Major Discussion Point

Challenges in protecting children online

Agreed with

Ahmad Bhinder

Shivnath Thukral

Agreed on

Children face multiple risks online

Social services and law enforcement often lack resources and expertise to address online challenges

Explanation

Johnson highlights that social services and law enforcement agencies often don’t have the necessary resources or expertise to effectively address digital threats to children. This gap makes it difficult to support at-risk children and identify perpetrators.

Evidence

She mentions the need for strong social services to identify and support children, and the insufficiency of mental health services in many countries.

Major Discussion Point

Challenges in protecting children online

Require tech companies to conduct child rights impact assessments

Explanation

Johnson advocates for requiring technology companies to conduct child rights impact assessments. This approach aims to proactively consider children’s rights and safety in the design and development of digital products and services.

Evidence

She mentions UNICEF’s advocacy for child rights due diligence and the concept of ‘child rights by design’ in the development of digital products and services.

Major Discussion Point

Approaches to ensuring children’s online safety

Governments should update laws and equip law enforcement and social services

Explanation

Johnson emphasizes the need for governments to update laws to address online violence and to provide resources and training to law enforcement and social services. This would enable better prevention, identification, and response to online threats to children.

Evidence

She mentions UNICEF’s work around the world to support governments in updating laws and policies related to online violence against children.

Major Discussion Point

Role of different stakeholders in protecting children online

Agreed with

Ahmad Bhinder

Haitham Al Jowhari

Agreed on

Need for multi-stakeholder approach

A

Audience

Speech speed

132 words per minute

Speech length

361 words

Speech time

163 seconds

Balance children’s right to protection with their right to privacy

Explanation

An audience member points out the need to balance protecting children online with respecting their right to privacy. This highlights the complexity of implementing safety measures without infringing on children’s rights.

Evidence

The speaker references the UN Convention on the Rights of the Child, specifically mentioning Article 16 on children’s right to privacy.

Major Discussion Point

Balancing protection and empowerment of children online

Continuously evolve digital safety education to keep pace with technology

Explanation

A high school student in the audience emphasizes the need for digital safety education to evolve alongside technology. Despite existing education, students may still become vulnerable or even become attackers, indicating a need for more effective and up-to-date approaches.

Evidence

The student shares personal observations from their school, noting that despite digital safety education, some students still become vulnerable or engage in harmful behavior online.

Major Discussion Point

Approaches to ensuring children’s online safety

Agreements

Agreement Points

Children face multiple risks online

Ahmad Bhinder

Shivnath Thukral

Afrooz Johnson

Risks include cyberbullying, addiction, inappropriate content, and privacy concerns

Bad actors are present in both the real and virtual worlds

Laws have not kept pace with rapidly evolving digital technologies

The speakers agree that children face various risks online, including cyberbullying, addiction, exposure to inappropriate content, and privacy issues. They acknowledge the presence of bad actors in digital spaces and the challenge of outdated laws in addressing these risks.

Need for multi-stakeholder approach

Ahmad Bhinder

Haitham Al Jowhari

Afrooz Johnson

Develop national children’s online protection strategies

Private sector should invest in R&D and work with law enforcement and regulators

Governments should update laws and equip law enforcement and social services

The speakers emphasize the importance of collaboration between various stakeholders, including governments, private sector, law enforcement, and social services, to effectively protect children online.

Similar Viewpoints

Both speakers advocate for empowering children and parents to take an active role in managing online safety, rather than relying solely on external controls.

Mo Isap

Shivnath Thukral

Empower children to self-govern their online safety

Parents need to be more vigilant and empathetic towards children’s online experiences

Unexpected Consensus

Balancing protection and privacy

Shivnath Thukral

Audience

Implement parental controls while respecting children’s privacy

Balance children’s right to protection with their right to privacy

There was an unexpected consensus between a tech company representative and an audience member on the need to balance child protection measures with respect for children’s privacy rights. This highlights a shared recognition of the complexity of implementing safety measures without infringing on children’s rights.

Overall Assessment

Summary

The main areas of agreement include recognizing the multiple risks children face online, the need for a multi-stakeholder approach to online child protection, and the importance of empowering children and parents in managing online safety.

Consensus level

There was a moderate level of consensus among the speakers on the key challenges and general approaches to protecting children online, suggesting a shared understanding of the complexities involved and of the need for collaborative efforts. However, nuanced differences in proposed solutions and emphasis indicate that, while the speakers agree on the broad issues, specific strategies and implementations remain open to debate.

Differences

Different Viewpoints

Approach to parental controls

Shivnath Thukral

Mo Isap

Implement parental controls while respecting children’s privacy

Empower children to self-govern their online safety

Thukral advocates for implementing parental controls by default, while Isap emphasizes empowering children to self-govern their online safety.

Overall Assessment

Summary

The main areas of disagreement revolve around the balance between protection and empowerment of children online, and the specific approaches to implementing online safety measures.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of protecting children online but have slightly different approaches or emphasize different aspects. This suggests a general consensus on the need for action, which could facilitate collaborative efforts to address the issue of children’s online safety.

Partial Agreements

All speakers agree on the need for safe online spaces for children, but differ in their approaches. Thukral emphasizes default safety features, Isap focuses on creating empowering environments, and Bhinder advocates for involving children in designing solutions.

Shivnath Thukral

Mo Isap

Ahmad Bhinder

Social media platforms should implement safety features by default

Create safe spaces for children to be empowered and experiment online

Involve children in designing solutions for their online safety

Takeaways

Key Takeaways

Protecting children online requires a holistic approach involving multiple stakeholders including governments, tech companies, parents, educators, and children themselves

There is a global consensus that action needs to be taken to protect children online, but policies and regulations are still catching up to rapidly evolving technologies

Empowering and educating children about online safety is crucial, while also implementing technological safeguards

Balancing protection with children’s rights to privacy and autonomy online is an important consideration

Online safety measures need to account for different socioeconomic contexts and evolve along with technology

Resolutions and Action Items

Tech companies should implement safety features by default in products used by children

Governments should update laws to adequately criminalize online violence against children

Child rights impact assessments should be required for tech companies developing products used by children

Schools should identify digital advocates among students to support peers on online safety

Develop national children’s online protection strategies in different countries

Unresolved Issues

How to effectively balance children’s right to protection with their right to privacy online

How to keep digital safety education evolving at the pace of technological change

How to address online safety challenges for children from different socioeconomic backgrounds

How to combat the problem of children becoming online attackers themselves despite safety education

Suggested Compromises

Implement parental controls while still respecting children’s privacy to some degree

Create safe online spaces for children to experiment and learn while still providing some oversight

Balance top-down safety measures with empowering children to self-govern their online experiences

Thought Provoking Comments

Children are digital natives… So for them, it’s all about digital. And that is why, when we think about them being digital natives, they live in the digital world.

speaker

Ahmad Bhinder

reason

This comment frames the fundamental challenge of protecting children online by highlighting how deeply integrated the digital world is for today’s youth.

impact

It set the tone for the discussion by emphasizing the need to understand children’s perspective and experiences in the digital realm.

We overprotect children in the world, and we underprotect them in the virtual world.

speaker

Philippe Nahas

reason

This succinct statement captures a key paradox in how society approaches child safety online versus offline.

impact

It prompted the panelists to consider the imbalance in protection measures and discuss ways to better safeguard children in digital spaces.

We cannot just use a single definition of children and think that that is be all and end all in terms of policy thinking… Unless we understand the nuance no policy can ever work

speaker

Mo Isap

reason

This insight highlights the complexity of creating effective policies for child online safety, emphasizing the need for nuanced approaches.

impact

It shifted the conversation towards considering more tailored and flexible policy solutions that account for diverse children’s experiences and backgrounds.

The approach we have taken is very pragmatic, forward-looking, and builds solutions… You cannot bring in a sheet of paper and tell people to say, these are the features that can happen. These features are already there

speaker

Shivnath Thukral

reason

This comment provides insight into how tech companies are proactively addressing online safety issues through product design.

impact

It brought a practical perspective to the discussion, highlighting existing solutions and the ongoing efforts of tech companies to improve safety features.

We need strong social services in order to identify and support children. In many countries mental health services are often insufficient so this can leave children without support

speaker

Afrooz Johnson

reason

This comment broadens the scope of the discussion by emphasizing the importance of real-world support systems in conjunction with online safety measures.

impact

It led to a more holistic consideration of child protection, linking online safety to broader social services and mental health support.

Overall Assessment

These key comments shaped the discussion by broadening its scope from purely technical solutions to a more comprehensive approach. They highlighted the complexity of the issue, emphasizing the need for nuanced policies, proactive design by tech companies, and strong real-world support systems. The discussion evolved from defining the problem to exploring multifaceted solutions involving various stakeholders, including governments, tech companies, educators, and parents. This resulted in a rich, nuanced conversation that acknowledged both the challenges and potential pathways for improving children’s online safety.

Follow-up Questions

What are the latest statistics on the proportion of internet users who are children?

speaker

Ahmad Bhinder

explanation

Updated data is needed to understand the current scale of children’s internet usage and inform policy decisions.

How can we effectively address the risks associated with immersive technologies like the metaverse for children?

speaker

Ahmad Bhinder

explanation

As new technologies emerge, it’s important to proactively consider their potential impacts on children’s safety and privacy.

How can we better involve children in the design of online safety measures and policies?

speaker

Ahmad Bhinder

explanation

Incorporating children’s perspectives is crucial for developing effective and relevant safety measures.

How can we address the different levels of vulnerability to online risks among children from various socio-economic backgrounds?

speaker

Mo Isap

explanation

Understanding and addressing these disparities is crucial for ensuring equitable protection for all children online.

How can we better equip educators to understand and address the complexities of children’s online experiences?

speaker

Mo Isap

explanation

Many educators struggle to keep up with rapidly evolving digital landscapes, impacting their ability to support children effectively.

How can we develop more effective ways to implement parental controls that children will accept?

speaker

Shivnath Thukral

explanation

Current parental control methods often face resistance from children, limiting their effectiveness.

How can we improve the speed and effectiveness of responding to reports of online harm to children?

speaker

Shivnath Thukral

explanation

Rapid response to reports is crucial for minimizing harm and maintaining user trust in online platforms.

How can we better support law enforcement and social services in addressing online child protection issues?

speaker

Afrooz Johnson

explanation

Many agencies lack the resources and expertise to effectively tackle digital child protection challenges.

How can we develop more effective educational initiatives to challenge harmful social norms and encourage help-seeking behaviors related to online safety?

speaker

Afrooz Johnson

explanation

Addressing underlying social norms is crucial for creating a safer online environment for children.

How can we ensure that child protection measures respect children’s right to privacy?

speaker

Audience member (Jutta Kroll)

explanation

There is a need to balance protection efforts with children’s rights to privacy as outlined in the UN Convention on the Rights of the Child.

How can we evolve digital safety education to keep pace with rapidly changing technologies and children’s behaviors?

speaker

Audience member (high school student)

explanation

Current educational approaches may not be sufficiently addressing the evolving nature of online risks and children’s interactions with technology.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.