Day 0 Event #12 Tackling Misinformation with Information Literacy

Session at a Glance

Summary

This discussion focused on tackling misinformation and promoting information literacy in the digital age. The speakers, including representatives from Google and other tech policy experts, explored various strategies and challenges in addressing online misinformation.

The conversation began by highlighting how misinformation is a persistent and complex problem that has evolved, especially since the 2016 US election. Platforms have implemented various approaches, including labeling, fact-checking partnerships, and reducing the reach of potentially false content. However, these methods face challenges, as misinformation is often not entirely false and can be difficult to definitively label.

The speakers emphasized the importance of information literacy skills for users. Google has developed tools like “About this result” to help users evaluate sources and claims more easily. The discussion also touched on the impact of AI-generated content, noting that while it presents new challenges, many core information literacy principles still apply.

A key point was the need for a holistic, multi-stakeholder approach to combat misinformation. This includes efforts from platforms, governments, civil society, and users themselves. The speakers stressed that user preferences and behaviors play a significant role in exposure to misinformation, highlighting the importance of individual media consumption habits.

The discussion also covered specific issues like gender-based online violence and the effectiveness of information literacy efforts. The speakers acknowledged that while progress has been made, misinformation remains a complex issue requiring ongoing research, collaboration, and adaptation of strategies.

In conclusion, the speakers emphasized the need for continued experimentation, cross-sector collaboration, and a focus on empowering users with information literacy skills to navigate the evolving digital information landscape.

Key points

Major discussion points:

– Challenges of addressing misinformation, including its “sticky” nature and the difficulty of labeling content

– Google’s approach to information literacy, including tools like “About this result” and watermarking for AI-generated images

– The role of user preferences and behavior in encountering misinformation

– The need for a multi-stakeholder approach to information literacy that includes users, not just governments and tech companies

– Evolving challenges with AI-generated content and deepfakes

Overall purpose:

The goal of this discussion was to explore approaches to tackling misinformation through information literacy, focusing on strategies used by tech platforms and the challenges involved.

Tone:

The tone was informative and collaborative throughout. The speakers shared insights from their professional experiences in a constructive manner, acknowledging the complexity of the issues. There was an emphasis on the need for continued experimentation and multi-stakeholder cooperation to address misinformation challenges.

Speakers

– Jim Prendergast: Moderator, with the Galway Strategy Group

– Sarah Al-Husseini: Head of Government Affairs and Public Policy for Saudi Arabia for Google

– Katie Harbath: Founder and CEO of Anchor Change

– Zoe Darmé: Director of Trust Strategy at Google

– Audience: Various audience members who asked questions

Additional speakers:

– Lina: From Search for Common Ground and the Council on Tech and Social Cohesion

– Ian: From the Brazilian Association of Internet Service Providers

Full session report

Tackling Misinformation and Promoting Information Literacy in the Digital Age

This discussion, opened by Jim Prendergast of the Galway Strategy Group and moderated by Sarah Al-Husseini, Head of Government Affairs and Public Policy for Saudi Arabia at Google, brought together experts from Google and the tech policy sector to explore strategies for addressing online misinformation and enhancing information literacy. The panel comprised Katie Harbath, Founder and CEO of Anchor Change, and Zoe Darmé, Director of Trust Strategy at Google.

Evolving Challenges of Misinformation

The session began with an interactive quiz, highlighting the difficulty in identifying misinformation. Zoe Darmé shared research from Australia suggesting people’s accuracy in spotting false information is only slightly better than a coin toss, emphasizing the need for more sophisticated approaches to combating misinformation.

Harbath aptly quoted Kate Klonick, stating, “You can’t bring logic to a feelings fight,” underscoring the emotional aspect of misinformation consumption. The discussion highlighted the persistent and complex nature of misinformation, which has evolved significantly since the 2016 US election.

Platform Strategies and Their Limitations

The speakers discussed various approaches implemented by platforms to address misinformation, including labelling, fact-checking partnerships, and reducing the reach of potentially false content. They also mentioned pre-bunking as a strategy used by platforms. However, they acknowledged the limitations of these methods, particularly given that misinformation is often not entirely false and can be difficult to definitively label.

Katie Harbath pointed out that people interpret labels and warnings differently, which can sometimes lead to unintended consequences. She also highlighted the importance of understanding the different modes people are in when consuming content online, which affects how they interact with information.

Google’s Approach to Information Literacy

Zoe Darmé detailed Google’s efforts to promote information literacy, focusing on tools like “About this result” that aim to help users evaluate sources and claims more easily. She also mentioned a new Google feature that shows whether search results are personalized or not, empowering users with more context about their search experience.

The discussion touched on Google’s development of SynthID watermarking technology for AI-generated images, though Darmé acknowledged that such technical solutions are not foolproof, especially when considering issues like screenshots that could potentially circumvent watermarking.

The Role of User Preferences and Behaviour

A key insight from the discussion was the significant role that user preferences and behaviours play in exposure to misinformation. Darmé revealed findings from a recent study on search engines showing that users often encounter misinformation when they are explicitly searching for unreliable sources, challenging assumptions about algorithmic responsibility and highlighting the need for approaches that address individual media consumption habits.

Emerging Challenges: AI-Generated Content and Deepfakes

The speakers acknowledged the evolving landscape of misinformation, particularly with the rise of AI-generated content and deepfakes. While these technologies present new challenges, the panel emphasised that many core information literacy principles still apply. They stressed the need for ongoing research and adaptation of strategies to address these emerging issues.

Darmé also discussed Google’s measures to combat involuntary synthetic pornographic imagery, including efforts to remove such content and provide support for victims.

Multi-stakeholder Approach and Collaboration

A recurring theme throughout the discussion was the necessity of a holistic, multi-stakeholder approach to combat misinformation. The speakers emphasised that effective solutions require efforts from platforms, governments, civil society organisations, and users themselves. They highlighted the importance of cross-industry collaboration, especially as AI technology continues to evolve.

However, both Harbath and Darmé addressed the challenges of implementing multi-stakeholder approaches to information literacy, including issues of coordination, resource allocation, and measuring effectiveness.

Addressing Specific Concerns

The discussion also touched on specific issues, such as health and election misinformation, with Al-Husseini mentioning targeted approaches being developed by platforms. In her closing remarks, she emphasized the importance of addressing youth populations in information literacy efforts.

Unresolved Issues and Future Directions

While the discussion provided valuable insights into current strategies and challenges, several unresolved issues emerged. These included questions about how to effectively address user preferences for unreliable sources, balancing platform intervention with concerns about censorship and free expression, and measuring the long-term effectiveness of information literacy efforts.

Conclusion

The discussion concluded with a call for ongoing collaboration and experimentation in addressing the challenges of misinformation. The speakers emphasised the importance of balancing technological solutions with user education and critical thinking skills. They acknowledged the complexity of the issue but expressed optimism that multi-stakeholder approaches and continued innovation could lead to more effective strategies for promoting information literacy and combating misinformation in the digital age.

Session Transcript

Jim Prendergast: Thank you for coming. This is the session, Tackling Misinformation with Information Literacy. My name is Jim Prendergast with the Galway Strategy Group. I’m going to help kick it off and I’m also going to help monitor the online activity. Fair warning, we want this to be a highly interactive session. You will be taking a quiz at some point. No pressure, it’s pretty easy and there are no wrong answers. We really want people to learn from this session, walk away with some new ideas, some new thinking, and by all means, we welcome your questions and interaction. I’m going to now kick it off to Sarah Al-Husseini, who’s the Head of Government Affairs and Public Policy for Saudi Arabia for Google.

Sarah Al-Husseini: Great. Thank you, Jim, and thank you everybody for joining us this afternoon. I know it’s day zero and it’s the end of the day, so those of you who have made it this far, thank you for being here. As Jim mentioned, I’m Sarah Al-Husseini, I lead Government Affairs and Public Policy for Google in Saudi and Egypt. I think this session is hugely timely with everything that’s happening on a global scale and the amount of information that is present online. I hope that you will be very engaged with our session today. We have a wonderful speaker lineup. With that being said, we’ll start with a presentation by Katie Harbath, Founder and CEO of Anchor Change, on the challenges of addressing misinformation. We’ll go over to Zoe Darmé, who is the Director of Trust Strategy at Google, who will present on Google’s approach to information literacy, and then we’ll go into a Q&A session. I’ll take the prerogative of being the moderator and ask the first few questions, and then hand it over to those of you in the audience who would like to engage as well. With that, I will hand it over to Katie. Katie, thank you so much for joining us. Zoe, thank you as well. Happy to have you here.

Katie Harbath: Yeah. Thank you so much for having me, and I’m sorry I can’t be there in person. But I wanted to start today by just sharing a little bit of the history of companies working around misinformation and some of the things that they have tried and how they approach this problem. Just to sort of ground ourselves, the current iteration of working on misinformation really started after the 2016 election. Misinformation has been around for quite some time in various forms and companies have been working with it and trying to combat it for quite some time. But after the 2016 election in the United States, there were a lot of stories, initially about Macedonian teenagers spreading fake news to make money. And it wasn’t until later in 2017 that we also started to realize and find the Russian Internet Research Agency ads that were on, I worked at Facebook, so not only on the Facebook platform, but many other platforms. And this is what really spurred a lot of the companies, I can speak on behalf of the work at Facebook in particular, to start adding labels and working with fact checkers to try to combat some of this. And misinformation is not just stuff that is around elections and hot button issues. Some of the earlier stuff too is things like a celebrity died in your hometown or those type of clickbaity headlines that the companies were trying to fight. The initial focus was also very much on foreign activity, so foreign adversaries trying to influence different elections around the world. The other important thing to remember about a lot of this is that much of this work is very much focused sometimes on the behavior of these actors, not necessarily the content as well. So this means things like pretending to be somebody they’re not on platforms that require people to use their real names, or coordinating with other types of accounts that they’ve created to try to amplify things.
So as we’re talking about this, it’s not just what they’re saying and whether it’s false, but it’s also how they might be trying to amplify it in inauthentic ways. You can go to the next slide. There we go. So I think a couple of other things to remember too as we think about this. Misinformation is sticky and fast, which means that it can spread very, very quickly, and it’s something that very much sticks with people, and it can be very hard to change their minds. We also find that, most of the time, things are not completely false or completely true. There’s usually a kernel of truth with a lot of misinformation around it, which makes it a lot trickier to figure out what to do, because you can’t just fully label it false or true. You also have things like satire, parody, and hyperbole that exist in many places and are perfectly legal types of speech, and understanding the intention of the poster and what they mean for it to be, and doing that at scale, is something that is incredibly tricky for many companies to do. And overall, these platforms very much do not wanna be the arbiters of truth. They do not wanna be the ones making decisions about whether or not something is true or false or what the facts are, because they have seen and been accused of the risk of censorship, whether that’s true or just perceived, and that has become a huge political problem for them, particularly in the United States, but also around the world. And sometimes defining subcategories of misinfo and dealing with these specifically can be a better way for platforms to develop and enforce policy. So rather than having a blanket one, you might prioritize, because for health misinformation, for instance, you may have more facts and authoritative information that you can refer to.
The same thing with elections, where, when and how to vote is something that election authorities have that is easier to point to than something that might be more amorphous or where there are disagreeing opinions about what is happening. And so sometimes you’ll see companies start to parse this out by the types of content that they’re seeing and the topic of it, in order to better figure out how to combat this and mitigate the risks that appear to them. Sorry, I haven’t had enough coffee. The risks that might happen to them. Jim, if you can go to the next slide. So a couple of strategies that we’ve seen companies take, since most companies do not take down fake information unless, again, it’s about some very, very specific topics, health and elections are two that I can think of. One is pre-bunking. So giving people a warning of the types of information that they might see, the types of stuff that could potentially be false, or even directing them to authoritative information on these sensitive topics. A lot of you may have seen during COVID, platforms would put a label about where you could get more information about COVID, or during election season might point you to authoritative information there. A lot of them, as I mentioned earlier, work with fact-checkers all around the world. Many work through the Poynter fact-checking network, and what that means is that the platforms aren’t making the decision, but they’re working with these fact-checkers, they’re giving them a dashboard, and those fact-checkers can decide what stories it is they wanna fact-check. They can write their piece, and then if they determine that it is false or partially false, a label will be applied to that. And then that will take the person to that fact-check.
The other thing it will do too is reduce the reach of this content, so fewer people can see it, but it doesn’t fully remove it. And then, as I’ve been mentioning, there’s the labeling component of this. And so people can see what these fact-checkers are saying while they’re consuming these different types of content. And Jim, if you can go to the next slide. A couple of notes about labels. A lot of this work continues to involve trial and error and experimentation as platforms have been implementing it. I know it’s really easy to just be like, just put a label on it, that’ll help people understand that it’s not fully true or give them more context. But unfortunately, how people interpret that, as we’ve been seeing with research, is a lot murkier. So for some people, when it says altered content, does that mean it was made with AI? Edited with AI? Was it Photoshopped? There’s a lot of different ways of manipulating or altering content, and not all are bad. Many people use editing software in perfectly legitimate ways. And so how do you help a user distinguish between that versus stuff that might have been nefariously edited? We find that people have many interpretations of what labels mean, and so the platform might not even have the proper label or enough information to even label it. The other thing is that when some content is labeled as false and other content is unlabeled, people will sometimes infer that the unlabeled content is true, even though that may not be the case. It may be that a fact checker hasn’t gotten to it. And so what sort of things are we training users to think in ways that were unintended?
And so platforms are very much trying to experiment with different ways of helping to build up, and I know Zoe is going to go into this, the information literacy of users, because it is not as simple as just putting a label on it, because how people interpret that is very different across generations, across cultures, across many different factors. If you want to go to the next slide. The one other thing I wanted to mention, this is a study that Google’s Jigsaw division, along with Gemic, did earlier this year looking at how Gen Z in particular, but I do think you can pull this out more broadly, approaches going online and how they think about information. And what this study found was that there are seven different modes that people are in when they go online. And they plotted this on a sort of axis, where on the far right, you have heavy content. This is stuff that is news, politics, weighs heavily on your mind, versus on the far left, it’s more lighthearted content. Think cats on Roombas, stuff like that. On the vertical axis, you have on the bottom, things that have social consequences, affect others. People think that they need to do something. At the top, it only affects them, so it’s not necessarily something they feel like they have to act on. So what they found is that most people are in that upper left quadrant, which is the time pass and lifestyle aspiration modes. This is where, at the end of the day, they’re trying to reach emotional equilibrium, they’re just trying to, you know, zone out a little bit and relax. And when they’re in these modes, they don’t care if stuff is true or not. However, what they found is that as they were absorbing it over time, they did start to believe some of the things that they were reading and consuming.
And what they also found with this is that people do still want that heavier stuff, that heavier news and information, but they want to be intentional about it. They want to know when they’re going to get it, and when they go in to get that information, they want to get it quickly, they want a summary, and then they want to get out of it. And so something about this, as we continue to have this conversation over the coming years, is going to be: how can we reach people where they’re at? And we also have to recognize that their feelings play a huge role in trying to combat misinformation. And as a common friend of Zoe’s and mine, Kate Klonick, has said in a recent paper, you can’t bring logic to a feelings fight. And this is something that we’re very much trying to think through and figure out when it comes to combating misinformation. Because again, logically, we think just label it, just tell them. And what we have found is that is not actually how the human psyche works. So I can’t remember if I’ve got one more slide or if we’re going over to Zoe.

Sarah Al-Husseini: I think that’s the final slide for you, Katie. And thank you so much for that insight. I think there are tons of approaches and strategies that can be used to safeguard users, both proactive and reactive, and we’ll get into that information literacy with Zoe in a second. Just for everybody in the room, if you haven’t had a chance to get a headset, they’re on the table over here. We also have the captioning behind us, so please feel free. Great to see that a few more people have joined the room. Wonderful. And so with that, Zoe, I’ll hand over to you for Google’s approach to information literacy.

Zoe Darmé: Great. Thanks so much, Sarah. And thanks, everybody. Jim did mention that we were going to start with a quiz and that there are no wrong answers, but there actually are right and wrong answers for this next quiz. There are three simple questions, and I want you to basically keep track for yourself. So the first question here, and folks on the chat are free to put their answers in the chat: which one of these has not been created with AI? Is it the photo on the left or the photo on the right, A or B? Not everybody has microphones in the audience, so maybe we’ll take a show of hands. Yeah, you can also just keep track for yourself, because we’ll reveal at the end. And we’re getting some answers in the chat and on Zoom, so thank you. Great. Now the next one is, I see Jim struggling with the clicker. Now, which photo is more or less as it is described? Is it the photo on the left from WWF, where the claim seems to be about deforestation, or is it the photo on the right, which also seems to be somewhat climate related, with a warship in the Danube finally being revealed because of low water levels? Which one is more or less as it is described? Great. Next one. Now, which one of these is a real product? Is it cheeseburger Oreo or is it spicy chicken wing flavor Oreo? Hopefully neither. Yeah, that was my answer, Sarah, hopefully neither. Okay, Jim, we can advance. So, the house on the left is a real place in Poland. The post about the sunken ship is unaltered, I would say, and accurate. The post on the left, actually from the WWF, is a cropped photo, and it’s the same photo taken on the same day, not from 2009 and 2019. And unfortunately, I hate to say it, but spicy chicken wing Oreo was a real product. I think it was a marketing stunt. Even still, I’d be so scared to eat it. And a reviewer said the worst part of the experience was the greasy split it left behind in my mouth, that still haunts me.
So I’d love a show of hands, and maybe in the chat, to see how many people got all three correct. Anybody? I’m not seeing too many hands. And don’t feel bad about yourself, because, next slide: we are actually pretty bad at identifying this info when presented in this way. A group of researchers from Australia found that our accuracy is only a little bit better than a coin toss. We’re not able to very easily always identify what’s wrong in an image, or to identify whether an image is misleading or not. We’re also not able to identify very effectively what it is about that image that’s wrong. What’s the salient part of the image? So this group of researchers actually tracked people’s eye movements to see if they were focusing on the part of the image that had been altered. And we’re just not trained visual photo authenticity experts. And so if it’s hard for us to do this even in a setting like this one, think about what Katie mentioned: when folks are just in time pass mode, they’re not always going to do a great job at this. Next slide, please. I think also in this day and age, when there’s a lot of synthetic or generated content, we’re getting caught up perhaps in the wrong question as well. As Katie mentioned, a lot of people just want us to label things, label things misinfo or not misinfo, label things generated or not generated. But “is this generated?” does not always mean the same thing as “is this trustworthy?” It really depends on the context. So on the left here, you see a quote unquote real photo of trash in Hyde Park. And the claim is that this trash was left by climate protesters. But actually this is a very common tactic and technique for misinfo. It’s just a real image taken out of context. This was actually a marijuana celebration for 420 day, and that makes a lot more sense that there would be a lot of trash left over. Now, this photo on the right is just something that I created. I said, create me
AI art of Hyde Park, London, with trash. And so, it really depends not only on how something was created, but how it’s being used and the context it’s being used in, with the caption and label and everything like that. Next slide, please. So, we’ll still need your plain old vanilla information literacy tools. These will need to evolve, given that there is more generated content, more synthetic content out there. Certainly, our tools need to evolve. But there’s not going to be a magical technical silver bullet for generated content, just like there’s not a magic silver bullet for mis- and disinformation overall. And so, the way that we’re thinking about these things at Google is inferred provenance, or inferred context, over here. This is like your classic information literacy techniques: training users to think about when did the image or claim first appear? Where did it come from? Who’s behind it? What’s the claim they’re making? And what do other sources say about that same claim? And then, tools on the right, which are assertive provenance tools. These can either be user visible or not. They’re explicit disclosures for AI-generated content, like watermarking, fingerprinting, markup metadata, and labels. Next slide, please, Jim. Thank you. So, we have set this out in a new white paper. You can scan to read it here. Or if you give your email to Sarah, I can connect with you, and we’ll make sure that we send you a copy. But this white paper sets out how we’re thinking about both inferred context and assertive provenance, and how they both play a role in meeting the current moment around generated content, trustworthiness, and misinformation. Next slide, please. Now, what Katie talked about are a bunch of tools that are happening across many different platforms. I’m going to focus on some of the tools and features that we brought directly into Google Search. So first, we have About this result.
Next to any blue link or web result on Google Search, there is a set of three dots. And if you click into those three dots, you can get this tool, which is designed to encourage easier information literacy practices like lateral reading, or basically doing more research on a given topic. And so, this will tell you what the source says about itself, what other people say about the source, and what other people are saying about the same topic that you searched for. So let’s say there was a piece of misinfo about the King of Bahrain having a robot bodyguard. When you click on the three dots next to that, you’ll see not only information about the source, but also web results about that topic. Spoiler alert: the King of Bahrain did not have a robot bodyguard, just in case you were wondering. Next slide, please. This is just another layer into About this result. It brings all of this information into one page, and it helps users carry out the CORE and SIFT methods. SIFT stands for Stop, Investigate the source, Find other sources, and Trace the claim. That’s really hard to expect people to do when they’re in time pass mode, so we wanted to put a tool directly into Search, just to make this as easy as possible for folks. Because one of the criticisms of inferred provenance or inferred context is that it puts a lot of responsibility onto the user. When we’re thinking about all those other modes, let’s say where it might be more important for users, let’s say making a big decision, like a financial decision, for example, we want to make sure that users have the tools that they need when they really feel motivated to go that extra mile. We’ve also built a similar feature into image results. In the image viewer, you can click on a result and you’ll see three dots. This is like a supercharged reverse image search directly in the image viewer. It will tell you an image’s history: when we, Google, first indexed the image.
Because sometimes an old image, again, will be taken out of context and go viral for a completely different reason. It’ll also show you an image’s metadata. That brings us, next slide, to assertive provenance. For Google’s consumer AI products, like the models that power Gemini or Vertex AI in Cloud, we are providing a durable watermark for content. Oftentimes, that’s not user visible. In About this image, you can see here that if that SynthID watermark is present, we’ll put “generated with Google AI” directly into the image viewer so that you can see it was produced by one of our image generation products. Now, the reason that it’s hard for Google to just do this for every image out there in the universe is for the reason that Katie mentioned earlier. Let’s take the example of Russian-backed Macedonian teens. They’re probably not using tools that apply watermarking. If they’re running a derivative of an open model, for example, there’s no way to force those other providers to watermark their content. There’s no motivation for the content creator in that example to use a watermark or a label. And we’re never going to have 100 percent accurate AI detectors that are able to take all of the information on the Internet, send it through an AI detector, and spit out a label or a watermark that’s accurate 100 percent of the time. So really, we need a holistic approach that involves inferred and assertive provenance, and a whole-of-society solution. Next slide, please. The last thing I’ll say is that there is a lot of talk about the role of recommendations and algorithms and how they’re designed, and whether that is what is creating or promoting or giving more reach to this misinformation that is sticky and fast, as Katie mentioned.
But a recent study, at least looking at search — and this is looking at Bing, actually — shows consistent evidence that user preferences play an important role in both exposure to and engagement with unreliable sites from search. So what does this mean? Searchers are coming across misinformation when there is high user intent to find it. That means they are searching explicitly for unreliable sources. So it's not a search for Taylor Swift that brings up misinfo about Taylor Swift; it's when you're searching for Taylor Swift plus the site that you like to go to. Now, that site may not be reliable. It might not be a reliable news site. But that is really when folks are most likely to encounter misinformation in the wild on a search engine. And so really, we have to focus on individual choices with these so-called navigational queries, because that's what's driving engagement. It really has to do with what users are actively seeking out. And that's a bit of an uncomfortable conversation, because it goes to a question of, like, how do you get users on a more nutritional or healthy media diet, rather than, like, how do we just label something or how do we just fact-check something? And that's a much harder problem to solve. So I'll stop there and turn back to Sarah.

Sarah Al-Husseini: Thank you very much, Zoe. And it’s great to see the Google tools that are helping empower people to make the best decisions and really find the best information for their decision-making and consumption. So thank you. With that, I’m going to turn over to the Q&A portion of the session. Oh, maybe Jim first. I have one online. Oh, great. Fantastic.

Zoe Darma: Wonderful. I see the question. So I'll read this question out. Does Google's watermarking for AI-generated images, like those created with Imagen, rely on metadata? If so, can it be removed by clearing the metadata, or is it embedded directly into the image itself? And yes, that slide that y'all were on just now, with the chart, might be a helpful one to go to. But essentially, the answer is no, it doesn't rely on metadata. It does produce metadata that shows the image has been created with a Google generative AI product. And it is tamper resistant, I will say, rather than tamper-proof — nothing is 100% tamper-proof. However, SynthID is tamper resistant. It is hard to remove from the image, and doing something like clearing the image's metadata is not going to remove the watermark. Now, this is a little bit different from other types of provenance solutions in the past. Some other types of metadata are easy to edit using very common image editing software. IPTC metadata, for example, you can edit, and it was not designed to be a provenance tool the way we're thinking about it now — but there are ongoing conversations happening, both with C2PA and IPTC, about how durable that metadata should be. Where we have metadata from SynthID or from IPTC, for example, we are including it in the image viewer the way I just showed you in About This Image. Thank you for the question.
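
Zoe's distinction — that clearing an image's metadata does not touch a watermark embedded in the pixels themselves — can be illustrated with a toy sketch. This is an illustration only, not how SynthID actually works; the even-parity "signal", the dictionary image representation, and the `generator` tag are all invented for the example:

```python
# Toy illustration (NOT SynthID): metadata lives beside the pixels,
# so clearing it does not remove a signal embedded IN the pixels.

def embed_pixel_signal(pixels):
    """Hide a toy signal by forcing every pixel value to be even."""
    return [p - (p % 2) for p in pixels]

def detect_pixel_signal(pixels):
    """The toy 'detector': the signal is present if all values are even."""
    return all(p % 2 == 0 for p in pixels)

def strip_metadata(image):
    """Simulate 'clear metadata': drop the tags, keep the pixels as-is."""
    return {"pixels": image["pixels"], "metadata": {}}

image = {
    "pixels": embed_pixel_signal([12, 37, 200, 91, 54]),
    "metadata": {"generator": "some-ai-model"},  # hypothetical provenance tag
}

cleaned = strip_metadata(image)
print(cleaned["metadata"])                      # {} - the tag is gone
print(detect_pixel_signal(cleaned["pixels"]))   # True - pixel signal survives
```

The point of the sketch is only the separation of concerns: a sidecar tag (IPTC-style metadata) disappears with one edit, while anything written into the pixel values travels with the image.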

Sarah Al-Husseini: Thanks so much. And maybe, Katie, back over to you. So misinformation and disinformation, as you mentioned, are very sticky problems with big impacts on society. There's a lot of external pressure for platforms to do something about these issues, but also a lot of concern about platforms overreaching, potential freedom of expression issues, and so on. Can you talk about how platforms think about these challenges of misinformation, and what kind of tools and approaches they have to address them?

Katie Harbeth: Yeah, absolutely. And I think this is why you've seen a variety of approaches for platforms to try. One thing in particular is the question of people's right to say something versus the right for that to be amplified. And so you oftentimes see platforms where they're, again, not taking content down, but they are adding labeling and trying to reduce its reach — which has also brought criticism on them, and a question around the principles of that. I think pre-bunking is really important as well, in trying to give people other information and context that they can see and be able to understand when they're consuming this content. You're starting to see new approaches from places like Bluesky, which are more decentralized, where not only the platform but anybody can label content, and then the user can decide what sort of content they do or do not want to see in their feed. So it puts much more of the power back in the hands of the users, versus the platform itself making some of those decisions. I think a lot of this will continue to evolve and change, too, as AI continues to play a big role in how we summarize the information we get. People are also thinking about that, and about what types of information are pulled into it. But this is an ever-evolving thing, as the pressure on platforms comes from, as you mentioned, people who say they should do more, and others who say they're taking down too much. And at the moment you are seeing more platforms taking less of a strong approach around, again, leave it up or take it down, and instead trying to find some of these other ways. Another one I should mention is X, slash Twitter. This is from before Musk took over, but it still exists: they have Community Notes.
And so they have a larger number of people who can help add a community note to something to give it more context — to say whether something might be true or partially true. And they have some really great mechanisms in place to make sure that cannot be gamed. But I think we'll continue to see a lot of experimentation on this as they try to balance, you know, freedom of expression versus the safety of the people who are using these platforms.

Sarah Al-Husseini: Fantastic. Thank you for that, Katie. That was really helpful. And maybe shifting gears a little bit, because I see a few questions coming up on AI in the chat. Maybe, Zoe, I'll tee you up. Can you talk about how the proliferation of AI-generated content changes the conversation around information literacy? And then we'll go in and take a question from the chat.

Zoe Darma: Yeah, I think it's evolutionary, not revolutionary. It's more a matter of the volume of content that people are seeing. But edited content is an age-old problem. The very first photographic hoax was of a ghost, made with a daguerreotype, for example. For as long as images have been created, there has been this question of what we do when something has been edited or altered. And that's why I'm a strong believer that our information literacy muscle really needs to grow as a society — because whether something is generated or not doesn't necessarily change the question: is this trustworthy or not? That's the key question we have to remind people of. Whether it's generated or not is just one element, and it really depends on the context. So I think what needs to change is that we need to ask, yes, is this generated or not — but still ask all of those other questions that we've always been asking ourselves when encountering content that could be potentially suspect. Hope that answers your question, Sarah.

Sarah Al-Husseini: Yeah, great. Thank you. And I know, Zoe, you answered directly in the chat, but maybe for those in the room: does Google's watermarking for AI-generated images, like those created with Imagen, rely on metadata? If so, can it be removed by clearing the metadata, or is it embedded directly into the image itself?

Zoe Darma: Yeah, so it provides metadata, but that metadata cannot be stripped easily. I say easily because nothing is 100% tamper-proof, but SynthID is very tamper-resistant, so it's not editable the way other metadata is. And I think that's a critical piece of what we as Google are doing. Again, neither Google, nor OpenAI, nor Anthropic, nor Meta — none of us — controls all of the generation tools that are out there. And it's very difficult with folks who are using a derivative of an open-source model — maybe smaller models, for example, run by other companies. There's not a way for us to force other companies to watermark. And so this is where it becomes really difficult, because we'll never have 100% coverage on the open web, even if the biggest players are all in C2PA and all responsibly watermarking and/or labeling where appropriate. There's always going to be some proportion of generated content out there on the open web that does not include a watermark.

Sarah Al-Husseini: Fantastic. Thank you, Zoe and Iberal, for those questions and answers. And maybe to a question in the room.

Audience: Yeah, thanks so much. This is Lina from Search for Common Ground and the Council on Tech and Social Cohesion. So it's powerful — you said it again, Zoe — that you can't actually force others to do certain things, which then in some ways pokes holes in your valiant efforts. There is growing evidence about really harmful tech-facilitated gender-based violence. And I'm just curious: are we seeing attention on this growing? Because we do hear that there are specific things you've put in place for health and elections — and a lot of that's because of the excellent work of the two of you, right? So what would it take for us to also begin to think differently about the harms of tech-facilitated gender-based violence, TFGBV? Do we need to rally other companies so that there is standardization of the watermarking of that kind of harmful content? Just, where do you think the conversation is at right now?

Zoe Darma: Thanks. That's a fantastic question. And image-based sexual abuse — in terms of, quote-unquote, "deepfake pornography," which is not what we call it internally; we call it involuntary synthetic pornographic imagery, or ISPI — is a great example of a problem that Google didn't necessarily create, right? We are not allowing our models to be used for image generation of deepfake pornography, or ISPI. However, it is an issue we're deeply grappling with, especially on the product I work on, Google Search, because a lot of that material is now out there on the open web. So what we've done — and I can only speak for ourselves — is take an approach that relies on both technical and multi-stakeholder, industry solutions. One of the things we've done is implement new ranking protections and ranking solutions so that we can better recognize that type of content — and not necessarily always by recognizing whether it is AI-generated or not; there are other signals we can use as well. For example, if the page itself is advertising the content as deepfake celebrity pornography, we can detect that, and we're applying ranking treatments so that it doesn't rank highly or well. We also have long had a content policy to remove that type of imagery. The other thing we're doing is providing more automated tools to victim-survivors. When you report even regular, non-synthetic NCII (non-consensual intimate imagery) to us, there are a couple of things we do on the back end. One is that if the image is found to be violative, we not only remove the image, we also dedupe using hashing technology. Now, hashing can be evaded with some alterations to the image itself. We also give reporting users the option to check a box saying they want explicit images removed, to the best of our ability, for queries about them.
If the query I'm reporting, for example, is "Zoe Darmay leaked nudes," I can check a box saying I also want explicit imagery filtered for results about my name — "Zoe Darmay leaked nudes," "Zoe Darmay," et cetera. That's another way we're addressing the problem through automation: it doesn't rely on finding all of the generated imagery, but attacks the problem through another dimension. Those are a couple of the ways. I'll throw in the chat our most recent blog post on involuntary synthetic pornographic imagery and all of the ranking protections we're applying to demote such content, and also to raise up authoritative content on queries like, for example, "deepfake pornography," where we're really trying to return authoritative, trusted sources about that particular topic and issue rather than problematic sites that are promoting such content.
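
Zoe notes that hash-based deduplication catches re-uploads but can be evaded by altering the image. A toy perceptual "average hash" shows both sides of that trade-off. This is an illustrative sketch under invented data, not Google's actual matching system, and real perceptual hashes work on downscaled 2-D images rather than short pixel lists:

```python
# Toy perceptual (average) hash: near-identical copies hash alike,
# but a deliberate transformation shifts the hash and evades the match.

def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is above the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [10, 200, 30, 220, 15, 240, 25, 210]
reupload = [11, 199, 31, 221, 14, 241, 24, 209]   # same image, tiny noise
altered  = [200, 10, 220, 30, 240, 15, 210, 25]   # deliberately transformed

print(hamming(average_hash(original), average_hash(reupload)))  # 0 -> caught as duplicate
print(hamming(average_hash(original), average_hash(altered)))   # 8 -> evades the match
```

The re-upload survives small noise because each bit only compares a pixel to the image mean; the deliberate rearrangement flips every bit, which is the evasion Zoe describes.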

Sarah Al-Husseini: Fantastic. Thank you. And I think we have another question in the room before bouncing back to online.

Audience: Hello. Hi. Okay. Katie and Zoe, thank you for the presentations and wonderful answers as well. My name is Ian. I’m from the Brazilian Association of Internet Service Providers. I have more of a doubt than a question. Do we have any studies already showing the effectiveness of the literacy on actually identifying and combating misinformation? I mean, does it have an actual impact or how much can we measure it already?

Katie Harbeth: I think. Yeah, I was gonna say but isn’t there a jigsaw one so we, I feel like you all have done quite a bit of research on this but I think there’s been some very preliminary stuff on pre bunking, where I first saw it really be effective and folks starting to realize its effectiveness was particularly when Russia invaded Ukraine, and sort of that pre bunking ahead of that actually, that actually happening but I’ll toss it to Zoe I don’t want to take Google’s thunder for some of the great research that they’ve done on this too.

Zoe Darma: Oh, and it wasn't my research either, so a big shout-out to our colleague Beth Goldberg, who's not here but has led a lot of our pre-bunking work. We can try to dig that up and throw it in the chat. So Katie covered pre-bunking; I'll cover information literacy. There is a lot of evidence that the SIFT and CORE methods work, and so we searched for evidence-based practices that we could make easier in the product itself. The first way I'll answer your question is: yes, these are evidence-based practices. SIFT, for example, was developed by Mike Caulfield, a misinformation researcher who was most recently with the University of Washington — he's since moved on, so I don't know his affiliation right now. CORE was developed by Sam Wineburg, another misinformation and information literacy researcher. When we've done user research on About This Result, for example, we've actually seen folk theories decrease. I'll caveat this by saying it was a small sample size and more research needs to be done, but internal indications for us are that consistent use of About This Result reduced folk theories about how somebody was being shown a certain result. For us, that was a really positive indicator that they had a better understanding of not only the results they were seeing, but how those results were chosen for them. A lot of people have folk theories about why we're showing certain results — you know, are we snooping in your Gmail and then giving you results based on things you're emailing, all those types of folk theories. And when you actually just say that it really has to do with the keywords you're putting in the search box, people understand that that's why they're seeing those types of results.
A lot of folks think the results are so relevant to them that we must know something about them, when oftentimes we're just using what the user puts in the box. People are unfortunately not as unique as they think they are. And so we know a lot about what people want when they're searching — gosh, Katie and I were talking about beach umbrellas yesterday. So for people searching for best beach umbrellas, we know a lot about them and are serving great, relevant results based on that, and people think, oh, this is exactly what I need, it must have to do with something about me. The other thing I'll say, which is a new feature you can check for yourself: we're rolling out a feature at the footer of the search results page that will say whether these results are personalized or not, and if they are personalized, you can try without personalization. I would encourage everybody to check that out, because a lot of the search results pages you'll find are not personalized — a great many of them are not. And the ones that are, right now, are on things like what to watch on Netflix, for example. It goes right at the bottom of your page. You can even click out of personalization to see how those results would change, and check that you're not in, like, an echo-chamber filter bubble.

Sarah Al-Husseini: Thank you for the great question and answers from both our panelists. I'm being told we have five more minutes left, so maybe one quick follow-up from the room — and I think this one is probably aimed at Zoe. If a screenshot is taken, would there be any way of tracking it, in the context of watermarks? Can they be removed easily?

Zoe Darma: Yeah, that's a great question. And for SynthID — I hate to do this weird Google thing, but we're 200,000 people and it's a different product area that created it — for screenshots and SynthID specifically, I don't know the answer directly, but I will certainly follow up and put it in the chat while Sarah's wrapping up. I'll say generally, yes, taking a screenshot is one way to strip metadata off an image. That was the classic example of evasion for certain other metadata techniques that we talked about. There are other ways to evade, too — we talked about evasion of hashing, for example, which can be done by adding a watermark or slightly modifying the image in some way. There are always ways for really motivated actors to get around technical tools. And so we have made it as difficult as possible to strip that metadata, but that's why we're saying, in a presentation like this, that we cannot rely on 100% technical solutions — we have to think about these other ecosystem solutions as well. That's why I come to presentations like this and always also talk about inferred provenance and inferred context. However, I will say that we've made it tamper resistant — so you can't just go into photo-editing software and remove it, for example. But I'll get you an answer. It's a good question about screenshotting in particular.
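
Zoe's broader point — a fragile tag in one place is easy to destroy, while a signal spread redundantly through the pixels survives scattered edits — can be sketched with a toy parity-vote watermark. This illustrates redundancy only; SynthID's actual scheme is a pair of neural networks and is not shown here, and the 20% corruption rate is an invented stand-in for screenshot or resize noise:

```python
import random

def embed_robust(pixels, bit):
    """Spread one watermark bit redundantly: set every pixel's parity to `bit`."""
    return [p - (p % 2) + bit for p in pixels]

def detect_robust(pixels):
    """Majority vote over all pixel parities, so scattered edits don't flip the verdict."""
    votes = sum(p % 2 for p in pixels)
    return 1 if votes > len(pixels) / 2 else 0

random.seed(0)
pixels = [random.randrange(0, 255) for _ in range(1000)]
marked = embed_robust(pixels, 1)

# Simulate an edit (screenshot/resize noise): corrupt roughly 20% of the pixels.
edited = [p ^ 1 if random.random() < 0.2 else p for p in marked]

print(detect_robust(marked))  # 1
print(detect_robust(edited))  # still 1: about 80% of the votes survive
```

A single-pixel tag would be wiped out by the same edit; redundancy is what buys the tamper resistance Zoe describes, at the cost of needing a dedicated detector rather than a metadata reader.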

Sarah Al-Husseini: Fantastic. Thank you so much, Zoe. Any other questions from the room? If not, maybe I'll wrap up with one quick question. Yep. Fantastic. And Katie, I'd love to start with you. Given the platform of IGF — we're all here this week in Riyadh for the event — what are some of the challenges of implementing a multi-stakeholder approach to information literacy? And how can these challenges be overcome, especially at a forum like IGF?

Katie Harbeth: Yeah, I think this work is absolutely multi-stakeholder and needs to be done from multiple different approaches. It's not enough to just ask the platforms to do this. And I think Taiwan is frankly a really great example of a multi-stakeholder approach to mis- and disinformation in a country. One of the biggest challenges that I've seen, and that I continue to want to work on, is helping those who have not been inside a company understand the scale and the operational challenges of some of the solutions, and thinking and brainstorming better about how we might do that. And then on the platform side, helping them understand the approaches that civil society and others are finding when they're trying to combat all of this in their countries and regions. So continued cross-collaboration is really important. The other thing is that this needs to continue to be experimental, because if there were a silver bullet, we would have all figured this out a long time ago — this is a really hard and tricky problem. And I think having open dialogue and conversations like this will continue to be important, particularly as we go into this new era of AI, which is very much going to change how we generate and consume information. Now is really the time to be thinking about how we shape what those models are going to look like for at least the next five to ten years.

Sarah Al-Husseini: Fantastic, thank you so much, Katie and Zoe.

Zoe Darma: The same question? Yeah — before I answer, I actually just wanted to go back to the watermarking question, because I found the answer through the best product ever, Google Search, with just a quick search. SynthID uses two neural networks. One takes the original image and produces another image almost identical to it, but embeds a pattern that is invisible to the human eye. That's the watermark. The second neural network can spot that pattern and tell users whether it detects the watermark, suspects the image has one, or finds that it doesn't have one. SynthID is designed so that the watermark can still be detected even if the image is screenshotted or edited in other ways, like rotating or resizing it. So that's what I was looking up. Can you repeat your final wrap-up question to me, Sarah?

Sarah Al-Husseini: Of course, and thank you for taking the time. I think we're a little biased towards Google Search being the best, which I absolutely love. We drink the Kool-Aid for sure. The question is: what are some of the challenges in implementing a multi-stakeholder approach to information literacy, and how can those challenges be overcome?

Zoe Darma: Yeah, I think one of the biggest challenges is the idea that it takes a lot of time and it's too much to expect of users. Multi-stakeholder approaches, even though they're called multi-stakeholder, are often really focused on governments — what governments can do — and on what tech companies can do. And one of the things you've heard from us consistently throughout this really fascinating talk — and thank you so much, Sarah, for doing a great job facilitating it — is that the third leg of this stool is what users contribute: what users do, what users seek out, what they're consuming, and how they're consuming it.
And so I think that's the biggest challenge. One of the studies I mentioned earlier really focused on how much users' expressed preferences play into whether they find this information — like, are users actively seeking out unreliable sources? That's a hard problem to solve. And there's a reason that multi-stakeholder approaches really want to focus on governments or technology companies: they're at the table; we're the ones doing the talking. But we really are missing a huge piece of the puzzle if we're not talking about users' expressed preferences — what they want to find, what they're seeking out, how they're consuming it — and how we can get them to be stronger and more reliable consumers and creators in the information ecosystem. And that's a tall order.

Sarah Al-Husseini: Thank you for that, Zoe. I think we're at time. So maybe just to wrap up: a big, huge thank you to our panelists, and to Jim and Zena for handling the clicker and stepping in with the internet connection. And just to say, I think it's very apparent from today's conversation that we need a holistic approach to the shared responsibility of information literacy — protecting and educating our users across stakeholders and governments, especially youth. As somebody who works with one of the largest youth populations in the world, I know this is something that sometimes gets overlooked but is really important: start them young. And bringing civil society into the conversation is always really important. So maybe I'll hand back over to Jim.

Jim Prendergast: Yeah, no, thanks everybody. Especially thanks for the great questions, both in person and online. That’s what we really look forward to is the interaction with everybody instead of talking amongst ourselves. So appreciate it. Everybody have a good evening and we’ll see you back here tomorrow. Can the speaker stay online for two seconds for a quick photo? Great. Thanks everyone. Thank you.


Katie Harbeth

Speech speed

185 words per minute

Speech length

2677 words

Speech time

867 seconds

Misinformation is sticky and spreads quickly

Explanation

Katie Harbeth explains that misinformation can spread rapidly across platforms and is difficult to counteract once it has taken hold. This stickiness makes it challenging for platforms to effectively combat false information.

Evidence

No specific evidence provided in the transcript.

Major Discussion Point

Challenges of Misinformation

Agreed with

Zoe Darma

Agreed on

Misinformation is a complex and challenging problem

People interpret labels and warnings differently

Explanation

Harbeth points out that users may have varying interpretations of content labels and warnings. This variability makes it difficult for platforms to effectively communicate the reliability or potential issues with certain content.

Evidence

Example given of how ‘altered content’ label could be interpreted as AI-generated, Photoshopped, or other forms of manipulation.

Major Discussion Point

Challenges of Misinformation

Differed with

Zoe Darma

Differed on

Effectiveness of labeling and warnings

Users’ emotional state affects how they consume information

Explanation

Harbeth discusses how users’ emotional states and intentions when using platforms influence their information consumption. She notes that people in different modes (e.g., relaxation vs. intentional information seeking) interact with content differently.

Evidence

Reference to a study by Google’s Jigsaw division and GEMIC on how Gen Z approaches online information.

Major Discussion Point

Challenges of Misinformation

Agreed with

Zoe Darma

Agreed on

User behavior and preferences play a significant role

Platforms struggle to balance free expression and content moderation

Explanation

Harbeth highlights the challenge platforms face in moderating content while respecting freedom of expression. She notes that platforms are often criticized for both over-censorship and under-moderation.

Major Discussion Point

Challenges of Misinformation

Platforms use fact-checking, labeling, and reducing reach of suspect content

Explanation

Harbeth outlines various strategies platforms employ to combat misinformation. These include partnering with fact-checkers, applying labels to questionable content, and reducing the visibility of potentially false information.

Evidence

Mention of platforms working with fact-checkers through the Poynter fact-checking consortium.

Major Discussion Point

Approaches to Combating Misinformation

Pre-bunking can be an effective strategy

Explanation

Harbeth suggests that pre-bunking, or providing users with warnings about potential misinformation before they encounter it, can be an effective approach. This strategy aims to prepare users to critically evaluate information they may come across.

Evidence

Reference to pre-bunking being effective during Russia’s invasion of Ukraine.

Major Discussion Point

Approaches to Combating Misinformation

Cross-industry collaboration is needed as AI evolves

Explanation

Harbeth emphasizes the importance of collaboration across industries to address the challenges posed by evolving AI technologies. She suggests that as AI changes how information is generated and consumed, stakeholders need to work together to shape future models and approaches.

Major Discussion Point

Evolving Landscape of Information and AI

Agreed with

Zoe Darma

Agreed on

Multi-stakeholder approaches are necessary


Zoe Darma

Speech speed

142 words per minute

Speech length

4387 words

Speech time

1850 seconds

Accuracy in identifying misinformation is only slightly better than chance

Explanation

Darma cites research showing that people’s ability to identify misinformation, particularly in images, is not much better than random guessing. This highlights the difficulty individuals face in distinguishing between authentic and manipulated content.

Evidence

Reference to a study by Australian researchers finding that accuracy in identifying misinformation is only slightly better than a coin toss.

Major Discussion Point

Challenges of Misinformation

Agreed with

Katie Harbeth

Agreed on

Misinformation is a complex and challenging problem

User preferences play a role in exposure to unreliable sources

Explanation

Darma discusses how users’ own search behaviors and preferences contribute to their exposure to unreliable information. She notes that people often actively seek out specific unreliable sources, rather than stumbling upon them randomly.

Evidence

Reference to a study on Bing search results showing that user preferences play an important role in exposure to and engagement with unreliable sites.

Major Discussion Point

Challenges of Misinformation

Agreed with

Katie Harbeth

Agreed on

User behavior and preferences play a significant role

Google implements tools like “About this result” to encourage information literacy

Explanation

Darma explains Google’s approach to promoting information literacy through features like “About this result”. This tool provides users with context about search results, encouraging critical evaluation of sources.

Evidence

Detailed description of the “About this result” feature in Google Search, including its functionality and purpose.

Major Discussion Point

Approaches to Combating Misinformation

Differed with

Katie Harbeth

Differed on

Effectiveness of labeling and warnings

Watermarking and metadata for AI-generated content can help, but are not foolproof

Explanation

Darma discusses Google’s use of watermarking and metadata for AI-generated content as a means of identification. However, she notes that these methods are not perfect solutions, as they can potentially be circumvented or may not be universally adopted.

Evidence

Description of Google’s SynthID watermarking technology and its resistance to tampering.

Major Discussion Point

Approaches to Combating Misinformation

Multi-stakeholder approaches are needed, including user education

Explanation

Darma emphasizes the importance of involving multiple stakeholders in addressing misinformation, including users themselves. She argues that user education and improving information literacy are crucial components of any comprehensive strategy.

Major Discussion Point

Approaches to Combating Misinformation

Agreed with

Katie Harbeth

Agreed on

Multi-stakeholder approaches are necessary

Information literacy skills remain crucial even with new AI tools

Explanation

Darma stresses that traditional information literacy skills are still important, even as new AI tools emerge. She argues that these skills need to evolve to address the challenges posed by AI-generated content, but remain fundamental to navigating the information landscape.

Evidence

Reference to SIFT and CORE methods as evidence-based practices for information literacy.

Major Discussion Point

Approaches to Combating Misinformation

AI-generated content adds new challenges to information literacy

Explanation

Darma discusses how the proliferation of AI-generated content creates additional complexities for information literacy. She notes that distinguishing between human-created and AI-generated content is becoming increasingly difficult.

Major Discussion Point

Evolving Landscape of Information and AI

Distinguishing between generated and trustworthy content is complex

Explanation

Darma explains that the trustworthiness of content is not solely determined by whether it is AI-generated or not. She argues that context and use of the content are crucial factors in assessing its reliability.

Evidence

Example of a real photo taken out of context versus an AI-generated image used appropriately.

Major Discussion Point

Evolving Landscape of Information and AI

Platforms are developing new tools to address AI-generated misinformation

Explanation

Darma outlines Google’s efforts to develop tools for identifying and managing AI-generated content. She discusses both technical solutions like watermarking and user-facing features to promote critical evaluation of information.

Evidence

Description of Google’s SynthID watermarking technology and the “About this result” feature.

Major Discussion Point

Evolving Landscape of Information and AI

Google is implementing measures to combat involuntary synthetic pornographic imagery

Explanation

Darma discusses Google’s approach to addressing the issue of involuntary synthetic pornographic imagery (ISPI). She outlines various technical and policy measures implemented to detect, demote, and remove such content from search results.

Evidence

Mention of ranking protections, automated tools for victim-survivors, and content removal policies.

Major Discussion Point

Addressing Specific Misinformation Concerns

Sarah Al-Husseini

Speech speed

193 words per minute

Speech length

932 words

Speech time

289 seconds

Platforms are developing targeted approaches for health and election misinformation

Explanation

Al-Husseini notes that platforms are creating specific strategies to combat misinformation in critical areas such as health and elections. This suggests a recognition of the particular importance and potential impact of misinformation in these domains.

Major Discussion Point

Addressing Specific Misinformation Concerns

Audience

Speech speed

137 words per minute

Speech length

233 words

Speech time

101 seconds

Tech-facilitated gender-based violence is an emerging concern

Explanation

An audience member raises the issue of technology-facilitated gender-based violence as a growing problem. This highlights the need for platforms to address specific forms of harmful content and behavior beyond general misinformation.

Major Discussion Point

Addressing Specific Misinformation Concerns

Agreements

Agreement Points

Misinformation is a complex and challenging problem

Katie Harbeth

Zoe Darma

Misinformation is sticky and spreads quickly

Accuracy in identifying misinformation is only slightly better than chance

Both speakers emphasize the difficulty in combating misinformation due to its rapid spread and the challenges users face in identifying it accurately.

Multi-stakeholder approaches are necessary

Katie Harbeth

Zoe Darma

Cross-industry collaboration is needed as AI evolves

Multi-stakeholder approaches are needed, including user education

Both speakers stress the importance of collaboration across industries and involving multiple stakeholders, including users, in addressing misinformation.

User behavior and preferences play a significant role

Katie Harbeth

Zoe Darma

Users’ emotional state affects how they consume information

User preferences play a role in exposure to unreliable sources

Both speakers highlight the importance of user behavior and preferences in the spread and consumption of misinformation.

Similar Viewpoints

Both speakers discuss the implementation of various tools and strategies by platforms to combat misinformation and promote information literacy.

Katie Harbeth

Zoe Darma

Platforms use fact-checking, labeling, and reducing reach of suspect content

Google implements tools like “About this result” to encourage information literacy

Both speakers acknowledge the complexity of content evaluation and the challenges in effectively communicating content reliability to users.

Katie Harbeth

Zoe Darma

People interpret labels and warnings differently

Distinguishing between generated and trustworthy content is complex

Unexpected Consensus

Importance of traditional information literacy skills

Katie Harbeth

Zoe Darma

Pre-bunking can be an effective strategy

Information literacy skills remain crucial even with new AI tools

Despite discussing advanced technological solutions, both speakers unexpectedly emphasize the continued importance of traditional information literacy skills and strategies like pre-bunking.

Overall Assessment

Summary

The speakers largely agree on the complexity of misinformation, the need for multi-stakeholder approaches, the importance of user behavior and education, and the ongoing relevance of traditional information literacy skills alongside new technological solutions.

Consensus level

High level of consensus on the main challenges and general approaches to combating misinformation. This agreement suggests a shared understanding of the problem and potential solutions, which could facilitate more coordinated efforts across platforms and stakeholders in addressing misinformation.

Differences

Different Viewpoints

Effectiveness of labeling and warnings

Katie Harbeth

Zoe Darma

People interpret labels and warnings differently

Google implements tools like “About this result” to encourage information literacy

While Harbeth emphasizes the challenges of user interpretation of labels, Darma presents Google’s approach as a potential solution, highlighting a difference in perspective on the effectiveness of such tools.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the effectiveness of specific strategies to combat misinformation, such as labeling and user education.

Difference level

The level of disagreement among the speakers is relatively low. They largely agree on the challenges posed by misinformation and the need for multi-faceted approaches. The differences are primarily in emphasis and specific strategies, rather than fundamental disagreements. This suggests a general consensus on the importance of addressing misinformation, which could facilitate collaborative efforts in developing comprehensive solutions.

Partial Agreements

Partial Agreements

Both speakers agree on the need for technical solutions to combat misinformation, but differ in their emphasis on specific approaches. Harbeth focuses on fact-checking and labeling, while Darma highlights watermarking and metadata for AI-generated content.

Katie Harbeth

Zoe Darma

Platforms use fact-checking, labeling, and reducing reach of suspect content

Watermarking and metadata for AI-generated content can help, but are not foolproof

Both speakers agree on the need for collaborative approaches, but Darma places more emphasis on user education as a crucial component.

Katie Harbeth

Zoe Darma

Cross-industry collaboration is needed as AI evolves

Multi-stakeholder approaches are needed, including user education

Takeaways

Key Takeaways

Misinformation is a complex, evolving challenge that requires multi-stakeholder approaches

Technical solutions alone are insufficient; user education and information literacy remain crucial

Platforms are developing new tools and strategies to combat misinformation, including AI-generated content

User preferences and behaviors play a significant role in exposure to misinformation

Balancing free expression with content moderation remains an ongoing challenge for platforms

Resolutions and Action Items

Google to continue developing and improving tools like ‘About this result’ and SynthID watermarking

Platforms to explore pre-bunking as an effective strategy against misinformation

Stakeholders to focus on user education and improving information literacy skills

Unresolved Issues

How to effectively address user preferences for unreliable sources

Balancing platform intervention with concerns about censorship and free expression

Addressing misinformation from sources not using responsible AI practices or watermarking

Measuring long-term effectiveness of information literacy efforts

Suggested Compromises

Platforms focusing on reducing reach and amplification of misinformation rather than outright removal

Implementing user-choice features like community notes or decentralized labeling systems

Balancing automated detection with human review and fact-checking partnerships

Thought Provoking Comments

Misinformation is sticky and fast, which means that it can spread very, very quickly and it’s something that very much sticks with people and it can be very hard to change their minds.

speaker

Katie Harbeth

reason

This comment succinctly captures a key challenge in combating misinformation – its rapid spread and persistence in people’s minds.

impact

It set the stage for discussing the complexities of addressing misinformation and why simple solutions like labeling may not be sufficient.

You can’t bring logic to a feelings fight.

speaker

Katie Harbeth (quoting Kate Klonick)

reason

This pithy statement encapsulates a crucial insight about the emotional nature of misinformation and why purely factual approaches often fail.

impact

It shifted the conversation towards considering the psychological aspects of misinformation and how to address them.

We are actually pretty bad at identifying this info when presented in this way. And a group of researchers from Australia found that our accuracy is a little bit better than a coin toss.

speaker

Zoe Darma

reason

This comment, backed by research, challenges assumptions about people’s ability to identify misinformation and manipulated content.

impact

It highlighted the need for better tools and education to help people identify misinformation, leading to discussion of Google’s approaches.

“Is this generated?” does not always mean the same thing as “Is this trustworthy?” It really depends on the context.

speaker

Zoe Darma

reason

This insight shifts the focus from simply identifying AI-generated content to evaluating its trustworthiness in context.

impact

It broadened the discussion beyond technical solutions to include the importance of context and critical thinking skills.

Searchers are coming across misinformation when there is high user intent for them to find it. That means they are searching explicitly for unreliable sources.

speaker

Zoe Darma

reason

This comment reveals a counterintuitive finding about how users encounter misinformation, challenging assumptions about algorithmic responsibility.

impact

It introduced the idea that user behavior and preferences play a significant role in misinformation exposure, shifting the conversation towards individual responsibility and media literacy.

Overall Assessment

These key comments shaped the discussion by highlighting the complex, multifaceted nature of the misinformation problem. They moved the conversation beyond simplistic technical solutions to consider psychological factors, user behavior, and the importance of context. The discussion evolved from focusing solely on platform responsibilities to emphasizing the need for a holistic approach involving user education, critical thinking skills, and understanding the limitations of both human perception and technical solutions in identifying and combating misinformation.

Follow-up Questions

How effective are information literacy efforts in actually identifying and combating misinformation?

speaker

Ian from the Brazilian Association of Internet Service Providers

explanation

Understanding the measurable impact of information literacy initiatives is crucial for evaluating and improving strategies to combat misinformation.

How can SynthID watermarking be affected by screenshots?

speaker

Audience member (unnamed)

explanation

This explores potential limitations of watermarking technology, which is important for understanding its effectiveness in identifying AI-generated content.
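The distinction this question probes — metadata-based provenance versus a watermark carried in the pixels themselves — can be illustrated with a toy sketch. This is an illustrative simulation only, not Google’s SynthID implementation (the pattern, functions, and image representation below are invented for the example): a screenshot copies the rendered pixels but discards the file’s metadata, so only a signal embedded in the pixels survives the copy.

```python
# Toy illustration: metadata provenance vs. pixel-embedded watermarking.
# NOT SynthID -- a simulation of why a screenshot strips file metadata
# but can preserve a signal carried in the pixel values themselves.

PATTERN = [1, 0, 1, 1, 0, 1, 0, 1]  # hypothetical watermark bit pattern

def embed_watermark(pixels):
    """Encode PATTERN into the least significant bit of each pixel value."""
    return [(p & ~1) | PATTERN[i % len(PATTERN)]
            for i, p in enumerate(pixels)]

def detect_watermark(pixels):
    """Return True if every pixel's LSB matches PATTERN."""
    return all((p & 1) == PATTERN[i % len(PATTERN)]
               for i, p in enumerate(pixels))

def screenshot(image):
    """A screenshot copies the rendered pixels but drops file metadata."""
    pixels, _metadata = image
    return (list(pixels), {})  # metadata does not survive

original = (embed_watermark([200, 13, 77, 145, 9, 54, 230, 81]),
            {"generator": "some-ai-model", "provenance_manifest": "..."})

copy = screenshot(original)
print(copy[1])                    # {}   -> metadata provenance is lost
print(detect_watermark(copy[0]))  # True -> pixel watermark survives
```

In reality a screenshot re-renders and may rescale or re-encode the image, so a robust watermark must survive such transformations — which is the design goal described for SynthID, and why the answer is more nuanced than this toy suggests.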

What would it take to begin thinking differently about the harms around tech-facilitated gender-based violence?

speaker

Lina from Search for Common Ground and the Council on Tech and Social Cohesion

explanation

This highlights the need to address specific types of harmful content and suggests exploring standardization of watermarking across companies for such content.

How can we better incorporate user preferences and behaviors into multi-stakeholder approaches to information literacy?

speaker

Zoe Darma

explanation

This area of research emphasizes the importance of understanding and addressing user behavior in consuming and seeking out information, which is often overlooked in current approaches.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #26 High-level review of AI governance from Inter-governmental P

Session at a Glance

Summary

This discussion focused on global AI governance from various perspectives, including government, industry, civil society, and youth. Participants explored the current status, challenges, and priorities in AI governance across different regions.

Key themes included balancing innovation with security and equality, addressing the needs of emerging economies, and ensuring cultural and linguistic diversity in AI development. The importance of open-source AI and its implications for economic development and flexibility were highlighted. Challenges such as data privacy, copyright issues, and the governance of both advanced and traditional AI applications were discussed.

Participants emphasized the need for a harmonized global approach to AI governance, recognizing the geopolitical and economic competition surrounding AI development. The discussion touched on the importance of infrastructure, particularly in Africa, where limited data centers and skills gaps pose significant challenges.

The role of youth in AI development was highlighted, along with concerns about data ownership and localization. The need for inclusive governance frameworks that involve multiple stakeholders, including youth, was stressed. Participants also discussed the importance of enforcement mechanisms and the potential for tax incentives to encourage compliance with AI governance policies.

The discussion concluded with a call for collaborative efforts in AI governance, emphasizing the need for transparency, partnerships between public and private sectors, and the implementation of voluntary reporting frameworks to inform policy decisions. Overall, the participants agreed on the necessity of a unified, inclusive approach to AI governance to ensure its responsible and beneficial development globally.

Keypoints

Major discussion points:

– Current challenges and risks of AI, including security, bias, diversity, and environmental impact

– The need for inclusive, global AI governance frameworks and standards

– Data and infrastructure gaps between developed and developing regions, particularly Africa

– Balancing innovation with regulation and safety considerations

– The roles and responsibilities of different stakeholders in AI development and governance

The overall purpose of the discussion was to explore different perspectives on the current state of AI governance from government, industry, civil society and youth representatives. The goal was to identify key challenges, priorities and potential paths forward for developing effective global AI governance.

The tone of the discussion was largely collaborative and solution-oriented. Speakers acknowledged both the opportunities and risks of AI, and emphasized the need for cooperation across sectors and regions. There was a sense of urgency about addressing governance gaps, but also optimism about the potential for AI to drive progress if managed responsibly. The tone became slightly more critical when discussing inequalities in AI development and data ownership between Global North and South.

Speakers

– Yoichi Iida: Assistant Vice Minister at the Japanese Ministry of Internal Affairs and Communications, moderator of the session

– Audrey Plonk: Deputy Director of the Science, Technology and Innovation Directorate of the OECD

– Thelma Quaye: Director of Infrastructure Skills and Empowerment from Smart Africa

– Leydon Shantseko: Representative from the Zambia Youth IGF

– Henri Verdier: French Ambassador

Additional speakers:

– Melinda Claybaugh: Privacy policy director at Meta

– Levi: Youth representative (full name not provided)

Full session report

AI Governance: A Global Perspective

This comprehensive discussion on global AI governance brought together diverse perspectives from government, industry, civil society, and youth representatives. The session, moderated by Yoichi Iida from the Japanese Ministry of Internal Affairs and Communications, explored the current status, challenges, and priorities in AI governance across different regions.

Current State and Challenges of AI Governance

The discussion began with a thought-provoking comment from Henri Verdier, the French Ambassador, who questioned whether the AI revolution would truly represent progress for humankind. This framing set the tone for considering the broader implications and ethics of AI development, beyond mere technological advancement.

Speakers highlighted several key challenges in the current AI landscape:

1. Balancing Innovation and Security: Governments face the task of fostering innovation while addressing potential risks and security concerns.

2. Infrastructure and Skills Gap: Thelma Quaye, representing Smart Africa, a pan-African organization, emphasised the lack of necessary infrastructure and skills in Africa to fully leverage AI. She likened AI to water, highlighting its potential to nourish and help societies grow, but also underscoring the need for proper governance.

3. Data Sovereignty and Localisation: Leydon Shantseko, a youth representative from Zambia, raised concerns about data sovereignty and localisation issues, particularly in Africa. He pointed out the practical challenges of implementing data localisation policies when most platforms used are not hosted in Africa.

4. Geopolitical Competition: Henri Verdier highlighted the geopolitical aspects of AI development, framing it as a source of power and intense competition between companies, countries, and international organisations.

5. Open Source AI: The discussion touched on the role of open source AI models in promoting innovation and economic development, while also presenting challenges for governance. Melinda Claybaugh, a policy director from META, elaborated on their open-source AI approach and its implications.

Priorities for AI Governance Frameworks

The speakers agreed on the need for a holistic, international approach to AI governance, but differed in their specific focuses:

1. Inclusive Approach: There was consensus on the importance of multi-stakeholder cooperation in developing AI governance frameworks. This includes involving youth throughout the process, as emphasised by Leydon Shantseko.

2. Reflecting AI Ecosystem Realities: Melinda Claybaugh stressed that governance should reflect the realities of the AI value chain and ecosystem, particularly for open source AI.

3. Building Local Capabilities: Thelma Quaye highlighted the importance of building local African datasets and AI capabilities to ensure relevance and reduce bias.

4. Interoperable Governance Tools: Audrey Plonk from the OECD emphasised the development of interoperable governance tools and reporting frameworks.

5. Balancing Global and Local Needs: The discussion highlighted the need to balance global standards with local needs and perspectives, particularly in developing regions.

6. Enforcement Mechanisms: Thelma Quaye stressed the importance of developing effective enforcement mechanisms for AI governance policies, especially in regions like Africa.

Roles and Responsibilities in AI Development

The speakers outlined various roles and responsibilities for different stakeholders:

1. Governments: Responsible for balancing innovation and security, and creating appropriate regulatory frameworks.

2. Companies: Should be transparent about AI development and associated risks.

3. International Organisations: The OECD is working to provide data and harmonisation across AI approaches, including through its AI observatory and integration of the Global Partnership on AI.

4. Youth: Should be involved in policy-making and allowed to innovate while addressing potential risks.

5. African Nations: Need to increase data infrastructure and sovereignty.

6. Public-Private Partnerships: Collaboration between public and private sectors is crucial for advancing common AI goals, such as developing representative global datasets.

Proposed Solutions and Action Items

Several concrete actions and proposals emerged from the discussion:

1. The OECD is finalising a reporting framework to implement the Hiroshima AI Code of Conduct, aiming to provide interoperability and harmonisation across different AI approaches.

2. France is organising an international AI summit in February 2025 to discuss global AI governance, as announced by Ambassador Verdier.

3. Efforts are being made to increase the interoperability of AI governance tools and frameworks across different initiatives.

4. Proposals for using AI and virtual reality to scale up technical and vocational education and training (TVET) skills in Africa.

5. Suggestions for developing partnerships between public and private sectors to advance common AI goals, such as developing representative global datasets.

Unresolved Issues and Future Considerations

Despite the productive discussion, several issues remained unresolved:

1. Effectively balancing innovation with regulation and risk mitigation.

2. Addressing data localisation and sovereignty concerns, particularly for developing regions.

3. Ensuring fair representation and inclusion of diverse perspectives in AI governance discussions.

4. Developing effective enforcement mechanisms for AI governance policies, especially in regions like Africa.

5. Increasing AI adoption across industries, particularly in smaller companies, as the current diffusion rate remains low.

6. Bridging the growing gap between public and private AI research capabilities, including the need for public research to reproduce private sector AI development results.

Conclusion

The discussion highlighted the complex and multifaceted nature of global AI governance. While there was general agreement on the need for comprehensive, inclusive approaches, the speakers’ diverse perspectives underscored the challenges in developing universally applicable frameworks. The conversation emphasised the importance of considering ethical, geopolitical, and developmental aspects alongside technical considerations in shaping the future of AI governance.

As AI continues to evolve rapidly, ongoing dialogue and collaboration between stakeholders will be crucial in addressing emerging challenges and ensuring that AI development truly represents progress for humankind. The upcoming international AI summit in France and the OECD’s work on reporting frameworks represent important steps towards more coordinated global efforts in AI governance. The inclusion of youth perspectives and the focus on addressing regional disparities, particularly in Africa, highlight the importance of a truly global and inclusive approach to AI governance.

Session Transcript

Yoichi Iida: AI Governance. So, my name is Yoichi Iida, Assistant Vice Minister at the Japanese Ministry of Internal Affairs and Communications. I’m the moderator of this session. We’ll talk about global AI governance from different perspectives and from different communities, such as government, industry, and civil society, including youth. So, we’re now having a little bit of technical difficulties. So, first of all, we have Ambassador Henri Verdier from the government of France. From industry, we have Melinda Claybaugh; she’s a privacy policy director from Meta. And we have one of the speakers from an international organization, the OECD, online: Audrey Plonk, Deputy Director of the Science, Technology and Innovation Directorate of the OECD. From civil society, we have two speakers. One is Thelma Quaye, if I pronounce correctly. She’s Director of Infrastructure, Skills and Empowerment from Smart Africa. And also, we have one more speaker from civil society, in particular the youth community, Mr. Leydon Shantseko. He’s a representative from the Zambia Youth IGF. So, thank you very much to all of you for joining us. And we will have a very productive session. So, in the beginning, I’d like to invite all five speakers to speak about your views on the general current status and also challenges in AI governance, probably in your domestic AI governance situation, or probably you can talk about the global AI governance situation. And you can also touch on your priorities, what you are most expecting from AI, and what you are doing now. So, I’d like to start with Ambassador Henri Verdier.

Henri Verdier: Good afternoon, everyone. So, you did ask us a very important question, and you did ask us privately to do it in four minutes, which is impossible. The main challenge is quite simple to say, not to do. Are we sure that this impressive revolution will be a progress? Not just innovation, not just power, but a progress for humankind. And that’s probably the main responsibility of governments, to be sure that we balance innovation and security, economic growth and equality, and efficiency and diversity, and to find a good balance. And that’s why we are part of, of course, a global movement with important companies, a multi-stakeholder community, but probably as governments, we have at least a responsibility in front of our citizens to pay attention to this. And as diplomats, I am a diplomat, we have a responsibility to do it within an international framework and in conversation with the important multi-stakeholder community. And if we start with the idea of progress, and then I will pass the mic to other speakers, I just want to emphasize that since the beginning of the story of AI, we have had different conversations about, so last year, for example, the main topic was existential risk. Now we are speaking more about, for example, equal development and are we addressing the needs of emerging economies? I think that the most important thing is to start by recognizing that we will have to face a lot of challenges and to try to have a broad vision of the challenges. So security matters. Security is not just AI becoming crazy and attacking humanity. Security is also cybersecurity. Security is also bias. Are we sure that we did train the model with good data and that we are not just reproducing current inequalities? And security is not everything. We also need to think about cultural and linguistic diversity.
I believe that if you don’t have a large language model for your language, your language will disappear as a working language, so as an economic language. So we need to be sure that everyone will have the possibility to enter in this revolution. Diversity doesn’t mean just linguistic diversity, because you need also to train the model with the knowledge of different cultures and to be sure that your point of view, your history, your perception of the world will be taken into account. We are soon to face the question of environmental impact of intellectual property. Are we sure that we still know how to repay creators and creations and build a good framework? Maybe you will tell about this. Maybe we have to find new concepts to protect privacy, because maybe just to protect my personal data is not enough to protect my privacy. And we can continue. Maybe we need a specific policy to be sure that we train enough skills and competencies in emerging economies and they will take a seat in the driver’s seat, not just as consumers. Maybe we have to rethink about education and not just to train engineers, but to be sure that the future citizens will be ready for this new world and there will be free minds and free citizens in the new world. I could continue. I won’t. But the idea is that the holistic approach and the perception of the global question we have to face is, from my perspective, very important. Thank you.

Yoichi Iida: Thank you very much, Ambassador. You talked about a lot of various risks and challenges. In particular, you talked about security and diversity. Diversity will be very important as AI continues to develop. And also, we need to recognize the importance of the project in the next generation. A lot of risks and challenges were talked about, but at the same time, we also recognize the importance of innovation. So, what about the industry perspective? I would like to now invite Melinda to share your view.

Melinda Claybaugh: Yes, thank you so much. Can everyone hear me? Yes. Okay, great. So, just a bit of context to set the stage for my perspective from Meta. Two things. One is that in the AI space, we are very much an open AI company. Not OpenAI, but an open source AI company. And what that means is that we are all in on providing our AI technology on an open source basis. So, our large language model, Llama, we have different versions of it, but it’s made available to anyone to download for free. And this, in our view, is the best way forward in terms of approaching AI innovation for a few reasons. One being that for developers, it is the most valuable and flexible option for them to build on and be able to customize applications to their local needs, fine-tune the way they would like to with the data that they want to. It also is the best, we think, from an economic development perspective. Being able to provide a really diverse ecosystem of AI tools to developers and to countries is going to have the most benefits from an economic perspective because it won’t be locked into a few companies that are providing closed models. And then finally, it benefits us as a company because we won’t be beholden to other operating systems, so to speak, that people will be building on our technology, which is a benefit to us. And we also come at it from the perspective of a company that has supported and signed on to various AI global frameworks in the last few years.
last year, including the Seoul Frontier AI commitments, which will require us to publish an AI safety framework before the AI summit in France next year, and, as an early adopter, we’re also really supportive of the G7 code of conduct. And so that’s our perspective. And what I’ve seen happening, I think in the AI governance landscape, there are some positives and some challenges. I think the positives are that we’ve seen a real harmonization in the AI safety conversation at the global level. So there’s an increased understanding of the safety risks. There’s an increased understanding of the steps that we need to take to mitigate those risks. And more importantly, I think a firm understanding that we need to have a harmonized global approach to this global technology. I think some of the challenges, however, that we’re seeing are that there’s a lot of conversations happening that are not necessarily relating to each other. So while we have international agreement on the safety conversation, as the ambassador pointed out, there are other conversations happening. So there’s the data conversation around data privacy and the use of data. There’s the copyright conversation that’s happening. There’s the conversation around the governance of all AI, not just advanced AI, but kind of our classic AI, and how do we guard against the risks and harms from regular AI when it’s used to make decisions that affect people’s lives. And so I think those are some, of course, there’s the industry, there’s a lot of industry standards being developed that are important in different ways. And then there’s the conversation around the AI safety institutes, which I think in a positive development are being stood up around the world that will help with the science of AI and the evaluations and benchmarks that should be looked to for AI governance.
So I think the question is going to be how to tie a lot of these things together: as the science deepens, as the industry standards deepen, as the global frameworks deepen, how do we connect these pieces to make sure that they talk to each other? And then finally, the point I want to make about one of our priorities in the AI governance conversation is how do we reflect in our governance frameworks the realities of the AI value chain, and particularly open-source AI? What do I mean by that? Well, we need to reflect the different roles that the actors in the AI value chain and ecosystem have to play, and those are unique and different roles. What does the model developer have within its responsibility in terms of safety and risk mitigation? Then what role does the deployer of the model play? And then the downstream developers? All of these players have unique roles and responsibilities, and as we look at a comprehensive governance framework for the ecosystem, we need to take that into account. Speaking from the open-source perspective, we don't have the control over and visibility into the downstream uses of the model that a closed-model provider might, simply because anyone can use our model for any purpose. In that case, then, what are the responsibilities of the developers who are building the applications for very specific use cases? So I think we need to bring that complexity to the conversation to make sure that we're using the right tools in the toolbox to address the harms that might arise. Thank you.

Yoichi Iida: Thank you very much, Melinda, for covering a lot of things. Actually, a lot of things are going on in divergence, or maybe in parallel, such as the international discussions on governance frameworks and risk assessment. And this, I thought, would be the second topic in our discussion.
But before that, I would like to invite the other speakers to share their overviews of the general, fundamental understanding of the current situation. The previous two speakers covered a lot of elements, like risks and challenges as well as opportunities, and diversity and inclusiveness will also be a very important part. Now I'd like to invite Thelma to share an African view: when we talk about AI governance, what do we need to prioritize, and how do you regard the current situation?

Thelma Quaye: Thank you very much. Good evening, everybody. So I'd like to clarify: Smart Africa is not a multinational organization; we are a pan-African organization working across Africa, supporting governments together with the private sector in the digital transformation. Now, for my perspective on AI from the African context, I would liken AI to water. Water nourishes us; it helps us to grow our crops. In the same way, AI helps us to be more efficient. It helps us to digest a lot of information, and it helps us to leapfrog. I'd like to emphasize the leapfrogging. You would know that, for instance, a number of African countries are behind in terms of education, in terms of health, in terms of transport. But if I take a country like Rwanda: with the help of AI, supplies are delivered, for instance, to rural Rwanda, to locations that are very hard to reach by car. And we are seeing an improvement in mortality rates, for instance, and this is based on AI. We are using AI in precision agriculture in Rwanda, for instance, where we have new use cases. These are things that we would not have been able to do leveraging our previous infrastructure. So for us, indeed, AI is a way to leapfrog. But just like the duality of water, if we don't govern it, it can be disastrous, and we should always remember that. We have had the African Union come up with the Continental AI Strategy, and as much as we have that blueprint, some countries, like Ghana, have also come up with national strategies. So there are efforts towards that.
The question still remains whether the approach we are taking is going to work. Are we going to use it? Is it a multi-stakeholder approach? Is the strategy that the African Union has developed going to be implemented? It's very important that we come together around an approach that, first of all, is harmonized, but also fair and ethical, as my other colleagues have spoken about, and inclusive. For us, if AI is going to help us to leapfrog, we need to make it inclusive as well. In terms of the challenges, from the African perspective, we know that the bedrock of AI is data, right? You need a lot of data to be able to properly utilize AI. But the number of data centers in Africa equals the number of data centers in Ireland. Look at the population of Africa, 1.4 billion, compared to the population of Ireland. So we need to increase the infrastructure. We could say that we will leverage other people's infrastructure, which is what we are doing now, but I believe that takes some sort of sovereignty from us. And if we are talking about ethical AI, fair AI, using AI within your own context, it's important that data is also within our jurisdiction, so we are able to properly leverage it and make sure that it's sovereign. The second challenge for us is the skills gap. Five years ago, there was a craze about creating a lot of coders, but now AI can also code. So what do we do now? I believe, for instance, that what we need in Africa is to use AI to increase and scale TVET skills. For instance, we are talking about electric cars; we are talking about assembling phones in Africa. Why can't we use virtual reality and generative AI to create classrooms for people to learn these skills? We are not able to scale that up today, but we can leverage AI to scale it up. But also, even within the space of software development, we still need people who can develop the AI, and the robots, and the rest.
And the last one is the data sets. This speaks to the biases that my colleagues have spoken about. Most of the AI tools we have use data sets that are from other regions; we do not have the data sets in Africa. So that's one key thing we need to focus on: building data sets so that the AI is African and true to our story, like we did with M-PESA. We can leverage our own data to tell our own story, because AI is only as intelligent as the data you feed it. If you feed it with other cultures, other data sets, it will always go against us. And we've seen some cases where AI banking tools from other regions have been applied and are rejecting people applying for loans just because they are of a certain race or a certain gender. So it's important for Africa, within that perspective. I know it sounds very nascent, but it is what it is. We need to be able to create those data sets. Thank you.

Yoichi Iida: Okay, thank you very much. Very deep thoughts about the current situation of Africa and also the challenges of Africa. I think many of those are shared by people from around the world. Your talk also reminded me of the speech by Minister Al-Swaha in the opening session, as well as other speakers. Actually, it was very impressive to hear so many speakers talk about AI in the opening of the Internet Governance Forum. Everything is emerging together, and everything is related and reflected in each other, so we need to weave all of these together to draw the best benefit from technologies, and we need to future-proof. From that perspective, I would like to ask for the viewpoint of the youth generation from Leydon.

Leydon Shantseko: I'm sure you can hear me, right? So, maybe in addition, because some of the things that the previous speakers have raised are valid and great concerns, of course there is a dual aspect to this, the good and the bad. Talking about the youth perspective, I think a number of youths have contributed largely to the development of artificial intelligence, be it at the innovation stage or in how some of the systems are working. And the duality of it is that some have used it to, for lack of a better term, cheat their way through, while others are using it holistically, based on honesty and truth, as well as transparency and accountability. Now, touching back a bit on Africa, the majority of the youths that have used AI have, among other things, been a source of data. We talk about data mining, for instance: most of the AI tools have been trained by a number of youths. Let me give a recent example: I think some Kenyan youths were protesting because the amount they were getting paid to feed these data systems was way lower compared to the amount of money these systems are going to generate from that data. So it's an issue of balance: you have the Global North developing most of the AI systems, but the Global South is being used to train them, and less investment is coming into the Global South. Thelma raised the issue of data centers versus the size of the population, in terms of how many people are actually using local data centers for these advanced AI systems. A good example, and one of the arguments that we've had, is that most of the platforms, systems, and technologies being used by the majority of Africans were not developed in Africa, which means data localization doesn't really exist in context. It can exist on paper, but in reality, data localization is not working.
So inasmuch as we talk about data governance and data regulation, we are building data acts and data protection laws around data that we don't actually host, which I think creates a disparity: you have a data act supposed to govern databases and localized data, but most of the data that we are actually working and operating on is not hosted on local platforms. So you have a big gap in the governance of data in Africa, because it concerns data we are not hosting in the first place. It comes back to who hosts most of the data, and how we create local acts or laws to govern the data that we are using when it's not actually hosted in Africa. So when you talk about what the priorities and expectations from the youth perspective should be: how much influence do we have over the data that we are producing at a local level when it's hosted outside of Africa? Give or take, I can use this as an example: the majority of governments and civil society organizations based in Africa are not using local tools. For example, Microsoft is one of the big techs that most African countries, governments, and civil society use as a platform, Microsoft 365 being a good case in point, right? We produce most of that data, and it's not stored in Africa at all. Most of the data centers are not based in Africa, yet the data being collected from Africa is something we expect to feed into AI, and we're talking about governance. So I think, from the youth perspective, looking forward: how do we create a balance between globalized data and governing data from the local perspective, and how much benefit do African countries get from the data they are giving to AI systems which are not hosted in and leveraged from Africa? That's pretty much the youth perspective, aside from just the adoption of AI by the majority of youths in Africa.
I’ll end here for now.

Yoichi Iida: Okay, thank you very much. A very interesting perspective, and it covers a lot of things. It is very interesting to see that we gather for the internet, we talk about AI, and now we are talking about data. So everything is related to each other, of course, and probably we also need to talk about infrastructure, such as data centers or computing power. Melinda talked about the value chain, and how business is trying to reflect the governance discussion onto the reality of the supply chain. It's very interesting, because from the government perspective, we are trying to reflect the reality of the supply chain in the discussion of policymaking. So it can be a kind of healthy mutual interaction, but if it fails, the future will not be very productive. As governments, we have been making a lot of efforts in developing governance frameworks, such as the G7 Hiroshima Code of Conduct that Melinda mentioned, on which we spent a lot of time with Ambassador Verdier and other colleagues from G7 countries. Also over the last few years, we have had the Council of Europe's AI Convention, we have had the EU AI Act put in place, and the Global Partnership on AI was integrated with the OECD AI community. A lot of things happened over the last one or two years, and much of it was connected to the OECD. My counterpart Audrey Plonk has been looking after probably everything, and she knows everything, I believe. I would like to ask for her comment with an overview of the current situation and on the points the previous speakers touched upon. So Audrey, please.

Audrey Plonk: Thank you, Yoichi, for the kind introduction, and hi to everyone. Sorry not to be with you in Riyadh today, but thank you so much for having me and the OECD on the panel. I'll be brief so that we can have more discussion. But just to say, we've had, of course, in the last couple of years…

Yoichi Iida: I'm sorry, we cannot hear you for the moment. Can you try again?

Audrey Plonk: Does this work? No?

Yoichi Iida: No, I'm sorry. Okay, the technical people are working on this, so please wait, and I will speak to you once again later. Now, we have talked about a lot of things, and we have touched upon data and infrastructure, and we have talked about the challenges and risks, of course, as well as the opportunities. We also heard the views from different communities. We do not have much time left, but I would like to invite all the speakers to make a comment from your perspective on the responsibilities or roles, or what you are planning to do within your own community, such as business, government, or industry. So I would like to invite a last comment from all speakers, but before that, I would like to invite Audrey to try again.

Audrey Plonk: Does it work now?

Yoichi Iida: Okay, now I can hear you.

Audrey Plonk: Oh, wonderful. Thank you. I think maybe I was in the observer room and not the speaker room. Anyway, thank you for the kind introduction, Yoichi, and hi, everyone. Sorry not to be with you in person, but I'm sure you're having a great IGF in Riyadh. On behalf of the OECD: we've obviously seen a lot of changes in the global internet governance space in the past five years, since we initially adopted our first AI Recommendation, which we just revised earlier this year. And of course, as Yoichi mentioned, we've had the emergence of safety institutes, we've had changes just recently here at the OECD with the integration of the Global Partnership on AI into our work program, and the emergence of a lot of different policy topics, many of which the other speakers talked about. Just to give a couple of examples: we know that issues of data and AI are super important. They're critical; everybody has said that in their own way. Our expert community here at the OECD has more than 400 people, and one of the more recent things we did in the last six months was create a group focused on privacy, data, and AI. So just to say that the topics on the table, and the issues that you're coping with in different regions around the world, are very much what we see across the community that we work in, which is a broad global community. I would also say that if you're not familiar with the OECD's AI observatory at oecd.ai, it's really the place where we're trying to put as much data and evidence as possible behind trends that are happening in AI, trends like what kinds of language models are being built in what languages, so that policymakers can look at that data and start to shape a policy environment that implements the broad principles that I think we all agree on, things that have been mentioned already, like quality and fairness, bridging divides, and other things.
So if you haven't checked out the observatory, it's a great place to look at things like where research cooperation is happening across countries, where patents are being filed, and where investment in AI is going. As we build it out, we have right now 70 different jurisdictions participating in the observatory, and we invite many more to come join us. I have a colleague in the room that you can talk to, since I'm not there. But a big part of what we're trying to do at the OECD is to provide as much interoperability and harmonization across different approaches as possible, whether those be technical standardization approaches or policy approaches. Many people have said how important it is that we operate in a global space and in a global way, and we're trying to bring our analytical, data-driven approach to AI. As we move forward into 2025: 2024 has been a very jam-packed year for AI, with a lot of focus on important issues of safety and frontier models. But I also note the importance that others have placed on, beyond frontier models, the day-to-day integration of AI. And I'll close by saying that some of the data we released earlier this year shows just how much runway there is for AI to diffuse, or be adopted, across industries, for a lot of potential benefit. At best, we see about an 8% diffusion rate of AI technologies, mostly in large companies. So, to make AI both more accessible and more widely adopted, we know it has to be trustworthy, but we also have to put some of these other framework conditions in place around safety, security, and fairness to make those diffusion numbers go up. I look forward to the rest of the discussion. And thank you, Yoichi.

Yoichi Iida: Thank you very much, Audrey, for the comment. The remaining time is very limited, but I would like to hear one more voice from each of the speakers. We always hear about inclusivity when we talk about AI governance, and with the GDC a lot of people are talking about AI gaps, the AI divide. Now, the OECD is a kind of small group, with 38 member countries, but after integrating with the GPAI they have more than 40 members, and they are welcoming more. So there will be a more inclusive group for AI discussion. I think France has a similar perspective on AI governance in organizing the AI Action Summit next year. So I would like to ask the Ambassador: what will be the objectives and goals of the AI Action Summit? You have just two minutes.

Henri Verdier: Hello, hello. Yes. So, in two minutes. First, maybe there is something we didn't say enough in this room, and maybe not within the IGF itself. Of course, AI is not just a very promising technology. It is also a source of power, and there is intense competition between companies that want to take the lead. There is also geopolitical competition between models, and competition between international organizations to take the lead. We have to face this; if we don't recognize this, we won't do a good job. For us, for France, and for a lot of people, one of the big threats regarding the future of AI and the future of AI governance would be a fragmentation of the global governance. Because if we let fragmentation happen, we will engage in a race to the bottom, a race to the worst: everyone, to remain strong within the competition, will propose the weakest regulation or governance. So we have to stick together, we have to exchange. So yes, of course, we think that the political framework of the OECD is of the utmost importance. We discuss, we integrate, we agree, and we are doing a great job within the GPAI and with the OECD. But we think that we also need a universal conversation. So the Paris Summit, February 10 and 11, soon now, in two months, will probably be the biggest international summit so far. We invited 110 heads of state and government, and we expect something like 80 heads of state and government, and most of the heads of international organizations. We will propose an agenda around good but broad governance, covering the whole holistic approach I mentioned earlier. It will also be a very intense multi-stakeholder conversation: we expect something like 1,000 to 2,000 delegates from research, from the industrial and private sector, and from civil society. And we will try to organize the conversation around three main topics, or maybe four, because risk, security, the safety institutes, and even the question of catastrophic or existential risk still matter. But we will add three layers.
A conversation regarding sustainable AI, because let's try not to break the planet again with this new, promising technology. A conversation about broad governance, addressing all the concerns I mentioned earlier. And a conversation about the need for public goods and digital public infrastructure, because frankly, we don't want this movement to be completely privatized; we also need public resources. Just to conclude with this: you know, I travel a lot. I meet a lot of researchers everywhere, including at the biggest American universities. Today, let's face it, public research cannot reproduce the results of the biggest companies. That's not your fault; you are not responsible. But we want, we need, a world where there is common knowledge, and where public research can at least reproduce, or maybe preempt, but at least reproduce, the results of the private sector. For this, we need more financing, more money in the public sector. Okay. Thank you very much.

Yoichi Iida: So we have five minutes left, but I would like to hear one more voice from each of the other speakers. Melinda, what are your expectations, and what do you think you can do in developing AI governance and the AI ecosystem?

Speaker 1: Thank you. So just a couple of things I want to touch on. I think companies clearly have significant responsibility here, particularly around participating in the international frameworks and initiatives that are being developed and adhering to them, and working with the safety institutes in their home countries to develop the research and evaluations of advanced models. Being transparent with everyone about how their large models are developed, what they're capable of, what the risks are, and how they're addressing those risks: all of those things, I think, are really important and squarely on the shoulders of developers. And then there are a lot of partnerships that need to be developed between the public and private sectors, whether on research capabilities or on developing data that is going to be representative of the entire world. For instance, we are working with the Gates Foundation, which is not a government, to develop African data for training. So how can we partner together to advance some of the common goals? That is going to be really important going forward, and I think we're starting to get a sense of what the real needs and the real opportunities are. And no one can do this alone, right? Everyone has their own interests to further, but we need to be doing it together. So really getting a clear understanding of where we can partner to advance governance is, I think, the next phase. Thank you.

Yoichi Iida: Thank you very much. That will be very important. And now I'm going to go back to you, Thelma.

Thelma Quaye: Thank you. So I'll speak from the governance perspective. One of the key gaps we have seen is the disparity between the governance of data and enforcement. From the African perspective, we need to come up with ways of enforcing so that governance is effective; there's no point in spending so much time developing policies and governance without enforcement.
And on that, we are looking at things like giving tax breaks to companies that are compliant, setting up authorities that are autonomous, those kinds of things. So enforcement is a key thing to consider. But so is the multi-stakeholder aspect. I believe that AI brings the world together even more than the internet, and so there's a need for a universal approach to AI governance. We fully support what was said about bringing everybody in and having a universal approach. That's also what we do at Smart Africa: bring the private sector and civil society organizations together with government. But that's only Africa, so we also need to go beyond Africa toward a sort of universal model. Thank you.

Yoichi Iida: Thank you very much. Actually, you know, enforcement is very important, and this is what we are working very hard on with Audrey. Before I invite Audrey to comment on this, I invite Leydon to share his expectations. What do you think you can do?

Leydon Shantseko: The first one is not to be left out of most of the conversation, especially when it comes to governance. I cannot speak confidently about how other continents or regions have done with regard to youth involvement, but with regard to Africa, there is partly a feeling among youths that governance mostly comes in to try to reduce or regulate innovative ideas before they actually thrive. So the call from the youths is for governments to engage youths with an open mind, to allow innovation to thrive, and then the safeguards, in terms of what the risks are, can be talked about with the youths in the meetings or in the room, rather than, say, reducing certain access. Because at the end of the day, it looks more like we are coming up against innovation by bringing in regulation, and there is also the tendency to regulate something before fully understanding how big it is. That is, I think, one of the challenges. Secondly, it's allowing youths to continue innovating, even when we are talking about emerging technology, given that youths have been at the heart of most of the innovative ideas. So when we talk about AI and emerging technologies, we don't have to think of the youths as not knowing something; we should give them the benefit of the doubt to take the risk, and allow them to thrive and grow. Because most of the technical ideas, and some of the infrastructure or developments we have as foundations, were actually developed by youths. So involving them from the start all the way to the end of the process is something that I would recommend. Or maybe also, let me say this in front of the Ambassador: considering the number of heads of state who are being invited, the question is how many youths are being invited alongside them?
Because if it's mostly governments making these policies, there's a high chance that youths' perspectives are left out of the room, and youths are involved only at a later stage, when they are behind most of the innovation. So trying to create a balance between the government perspective and the youth perspective in most governance processes becomes very critical. Thank you.

Yoichi Iida: Thank you. Sounds good. Thank you very much for the comment. Now we have heard about enforcement and innovation, and I would like to invite Audrey to offer a conclusion on my behalf.

Audrey Plonk: Thanks, Yoichi. I just want to say that governance is a lot more than regulation. Regulation is really important, but governance can include other tools, and I don't want to miss the opportunity to talk about one that we hope to finalize very soon, this week: an implementation framework, or reporting framework, for the Hiroshima AI Code of Conduct. The purpose of this framework is to allow companies, institutions, and organizations to report publicly on their activities related to the Code of Conduct, so that we can take voluntary tools and move them from just nice words on paper to an ecosystem of information that can inform policy decisions. I think a prior speaker rightly said that we often operate in a vacuum. Those of us who work on AI day in and day out may know a lot, but there's a lot we don't know about AI, and it's very hard to implement good governance or regulation in that vacuum. So filling that void is an important step. As part of the process to develop this reporting framework for implementing the Hiroshima Code of Conduct, we've also mapped the Code of Conduct to many other codes of conduct. And for the organizations that we are asking, and hoping, will adopt it and start to engage in this governance dialogue, with more than just talking, which is also really important, but with information sharing in a concrete way that can help inform researchers and the public, I think there's a way to make these different systems as interoperable as possible. That's not a negative thing for the world; instead, it advances a system whereby we can compare like things, in an apples-to-apples way, as we say, to see what's happening at a global scale. So I hope to be able to announce good news there.
I know you hope so too, Yoichi, as we really work to implement some of these extremely important governance activities that have taken place over the last couple of years, and try to move them out of the negotiating rooms and into the practical implementation phase. So thank you so much.

Yoichi Iida: Okay. So, if the Hiroshima Code of Conduct is put into action together with a monitoring mechanism, that will be a very experimental mechanism, where the private sector and governments work together, on a voluntary basis, to ensure the safety, security, and trustworthiness of AI systems. We know that is not the only answer, but we are making a lot of efforts to build up a governance framework, which should be inclusive and trustworthy. And we understand, of course, that this effort should be made in collaboration among the different stakeholders: not only government, but industry, civil society, academia, youth, and others. I hope the discussion was productive and helpful to the audience, and I hope we continue working together towards an open, free, non-fragmented, and trustworthy AI ecosystem globally. So thank you very much to all the speakers, and also thank you very much to the audience. I am very sorry about the audio system, but I hope you enjoyed the discussion. Thank you very much.

Yoichi Iida: Thank you.

Henri Verdier

Speech speed: 128 words per minute
Speech length: 1085 words
Speech time: 505 seconds

AI as a source of progress and power requires balanced governance

Explanation

AI is not just a promising technology but also a source of power and intense competition between companies and countries. There is a need for balanced governance to ensure AI benefits humanity while addressing potential risks.

Evidence

The speaker mentions the upcoming Paris Summit in February, which will be a large international summit with 80+ heads of state to discuss AI governance.

Major Discussion Point

Current State and Challenges of AI Governance

Need for holistic, international approach to address diverse challenges

Explanation

A holistic approach to AI governance is necessary to address various challenges including security, cultural diversity, environmental impact, and intellectual property. The speaker emphasizes the importance of a universal conversation on AI governance.

Evidence

The Paris Summit will propose an agenda around broad governance, addressing topics like sustainable AI, digital public infrastructure, and the needs of emerging economies.

Major Discussion Point

Priorities for AI Governance Frameworks

Agreed with

Speaker 1

Thelma Quaye

Yoichi Iida

Agreed on

Need for inclusive and international approach to AI governance

Differed with

Speaker 1

Differed on

Approach to AI governance

Governments responsible for balancing innovation and security

Explanation

Governments have a responsibility to balance innovation and security, economic growth and equality, and efficiency and diversity in AI development. There is a need to ensure that AI progress benefits humankind as a whole.

Major Discussion Point

Roles and Responsibilities in AI Development

Agreed with

Speaker 1

Leydon Shantseko

Agreed on

Importance of balancing innovation and regulation


Speaker 1

Speech speed

145 words per minute

Speech length

461 words

Speech time

189 seconds

Open source AI models promote innovation and economic development

Explanation

Open source AI models provide flexibility for developers to customize applications to local needs. This approach is seen as beneficial for economic development by creating a diverse ecosystem of AI tools.

Evidence

The speaker mentions their company’s large language model, Llama, which is made available for free download.

Major Discussion Point

Current State and Challenges of AI Governance

Agreed with

Henri Verdier

Leydon Shantseko

Agreed on

Importance of balancing innovation and regulation

Governance should reflect realities of AI value chain and ecosystem

Explanation

AI governance frameworks need to reflect the different roles and responsibilities of actors in the AI value chain. This includes model developers, deployers, and downstream developers, each with unique responsibilities in risk mitigation.

Evidence

The speaker discusses the differences between open source and closed model providers in terms of control and visibility into downstream uses.

Major Discussion Point

Priorities for AI Governance Frameworks

Agreed with

Henri Verdier

Thelma Quaye

Yoichi Iida

Agreed on

Need for inclusive and international approach to AI governance

Differed with

Henri Verdier

Differed on

Approach to AI governance

Companies should be transparent about AI development and risks

Explanation

Companies have significant responsibility in AI governance. They should participate in international frameworks, work with safety institutes, and be transparent about their large models’ development, capabilities, and risks.

Evidence

The speaker mentions their company’s support for various AI global frameworks and commitment to publish an AI safety framework.

Major Discussion Point

Roles and Responsibilities in AI Development


Thelma Quaye

Speech speed

150 words per minute

Speech length

924 words

Speech time

368 seconds

Africa lacks necessary infrastructure and skills to fully leverage AI

Explanation

Africa faces challenges in leveraging AI due to a lack of data infrastructure and skills. The continent needs to increase its data center capacity and develop AI-related skills to fully benefit from the technology.

Evidence

The speaker mentions that the number of data centers in Africa equals the number in Ireland, despite the vast population difference.

Major Discussion Point

Current State and Challenges of AI Governance

Importance of building local African datasets and AI capabilities

Explanation

There is a need to create African datasets and develop local AI capabilities to ensure AI is relevant and beneficial to the African context. This is crucial for addressing biases and ensuring AI tools are appropriate for African needs.

Evidence

The speaker gives an example of AI banking tools from other regions rejecting loan applications based on race or gender.

Major Discussion Point

Priorities for AI Governance Frameworks

Differed with

Leydon Shantseko

Differed on

Focus of AI development in Africa

Africa needs to increase data infrastructure and sovereignty

Explanation

Africa needs to increase its data infrastructure and ensure data sovereignty. This is crucial for leveraging AI effectively and ensuring that AI governance is relevant to the African context.

Evidence

The speaker mentions the need for effective enforcement of AI governance policies in Africa, suggesting measures like tax breaks for compliant companies.

Major Discussion Point

Roles and Responsibilities in AI Development

Agreed with

Henri Verdier

Speaker 1

Yoichi Iida

Agreed on

Need for inclusive and international approach to AI governance


Leydon Shantseko

Speech speed

172 words per minute

Speech length

405 words

Speech time

141 seconds

Youth perspective highlights data sovereignty and localization issues

Explanation

The youth perspective emphasizes issues of data sovereignty and localization in Africa. There is a concern about the lack of local data centers and the implications for data governance when most data is hosted outside Africa.

Evidence

The speaker mentions that most African governments and organizations use platforms like Microsoft 365, with data stored outside Africa.

Major Discussion Point

Current State and Challenges of AI Governance

Youth involvement needed throughout AI governance process

Explanation

There is a need for greater youth involvement in AI governance processes. The speaker argues that youths should be engaged with an open mind, allowing for innovation to thrive while addressing potential risks.

Evidence

The speaker mentions the perception among youth that governance often comes in to regulate innovative ideas before they can thrive.

Major Discussion Point

Priorities for AI Governance Frameworks

Youth should be allowed to innovate while addressing risks

Explanation

The speaker advocates for allowing youth to continue innovating in emerging technologies like AI. There’s a call to give youth the benefit of the doubt to take risks and grow, while still addressing potential risks.

Evidence

The speaker points out that youths have been at the heart of most innovative ideas and have developed many of the foundational technologies we use today.

Major Discussion Point

Roles and Responsibilities in AI Development

Agreed with

Henri Verdier

Speaker 1

Agreed on

Importance of balancing innovation and regulation

Differed with

Thelma Quaye

Differed on

Focus of AI development in Africa


Audrey Plonk

Speech speed

139 words per minute

Speech length

1433 words

Speech time

616 seconds

OECD working to provide data and harmonization across AI approaches

Explanation

The OECD is working to provide data and evidence on AI trends through its AI observatory. This effort aims to help policymakers shape policy environments that implement broad principles like quality, fairness, and bridging divides.

Evidence

The speaker mentions the OECD AI observatory at oecd.ai, which provides data on trends like language model development, research cooperation, and investment in AI.

Major Discussion Point

Current State and Challenges of AI Governance

Developing interoperable governance tools and reporting frameworks

Explanation

The OECD is working on developing interoperable governance tools and reporting frameworks. This includes an implementation framework for the Hiroshima AI Code of Conduct, aimed at allowing organizations to report on their activities related to the code.

Evidence

The speaker mentions the upcoming finalization of a reporting framework to implement the Hiroshima AI Code of Conduct.

Major Discussion Point

Priorities for AI Governance Frameworks


Yoichi Iida

Speech speed

121 words per minute

Speech length

2134 words

Speech time

1055 seconds

Multi-stakeholder cooperation needed to advance governance

Explanation

The speaker emphasizes the need for collaboration among different stakeholders to build an inclusive and trustworthy AI governance framework. This includes cooperation between government, industry, civil society, academia, youth, and others.

Major Discussion Point

Roles and Responsibilities in AI Development

Agreed with

Henri Verdier

Speaker 1

Thelma Quaye

Agreed on

Need for inclusive and international approach to AI governance

Agreements

Agreement Points

Need for inclusive and international approach to AI governance

Henri Verdier

Speaker 1

Thelma Quaye

Yoichi Iida

Need for holistic, international approach to address diverse challenges

Governance should reflect realities of AI value chain and ecosystem

Africa needs to increase data infrastructure and sovereignty

Multi-stakeholder cooperation needed to advance governance

Speakers agree on the importance of a comprehensive, global approach to AI governance that involves multiple stakeholders and addresses various challenges across different regions.

Importance of balancing innovation and regulation

Henri Verdier

Speaker 1

Leydon Shantseko

Governments responsible for balancing innovation and security

Open source AI models promote innovation and economic development

Youth should be allowed to innovate while addressing risks

Speakers emphasize the need to foster innovation in AI while also addressing potential risks and security concerns through appropriate governance measures.

Similar Viewpoints

Both speakers highlight the challenges faced by Africa in terms of AI infrastructure, data sovereignty, and the need for local development of AI capabilities.

Thelma Quaye

Leydon Shantseko

Africa lacks necessary infrastructure and skills to fully leverage AI

Youth perspective highlights data sovereignty and localization issues

Both speakers emphasize the importance of transparency and responsibility in AI development, whether from companies or governments.

Henri Verdier

Speaker 1

Companies should be transparent about AI development and risks

Governments responsible for balancing innovation and security

Unexpected Consensus

Importance of youth involvement in AI governance

Leydon Shantseko

Yoichi Iida

Youth involvement needed throughout AI governance process

Multi-stakeholder cooperation needed to advance governance

While youth involvement might not typically be a primary focus in AI governance discussions, both speakers emphasized its importance, suggesting a growing recognition of the need for diverse perspectives in shaping AI policies.

Overall Assessment

Summary

The speakers generally agree on the need for a comprehensive, inclusive approach to AI governance that balances innovation with risk mitigation. There is consensus on the importance of international cooperation, addressing regional challenges, and involving diverse stakeholders, including youth.

Consensus level

Moderate to high consensus on broad principles, with some variation in specific focus areas. This level of agreement suggests potential for collaborative efforts in developing global AI governance frameworks, but also highlights the need for tailored approaches to address region-specific challenges, particularly in developing areas like Africa.

Differences

Different Viewpoints

Approach to AI governance

Henri Verdier

Speaker 1

Need for holistic, international approach to address diverse challenges

Governance should reflect realities of AI value chain and ecosystem

While Henri Verdier emphasizes a holistic, international approach to AI governance, Speaker 1 focuses on reflecting the realities of the AI value chain and ecosystem in governance frameworks.

Focus of AI development in Africa

Thelma Quaye

Leydon Shantseko

Importance of building local African datasets and AI capabilities

Youth should be allowed to innovate while addressing risks

Thelma Quaye emphasizes the need for building local African datasets and AI capabilities, while Leydon Shantseko advocates for allowing youth to innovate freely in AI development.

Unexpected Differences

Data localization and sovereignty in Africa

Thelma Quaye

Leydon Shantseko

Africa needs to increase data infrastructure and sovereignty

Youth perspective highlights data sovereignty and localization issues

While both speakers address data sovereignty in Africa, Leydon Shantseko unexpectedly highlights the youth perspective on this issue, emphasizing the practical challenges of data localization when most platforms used are not hosted in Africa.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to AI governance, the focus of AI development in Africa, and the practical implementation of data sovereignty.

Difference level

The level of disagreement among speakers is moderate. While there is general agreement on the importance of AI governance and development, speakers differ in their specific approaches and priorities. These differences reflect the complexity of AI governance and the need for tailored approaches to different contexts, particularly in developing regions like Africa. The implications suggest that a one-size-fits-all approach to AI governance may not be effective, and that balancing global standards with local needs and perspectives will be crucial.

Partial Agreements

Partial Agreements

All speakers agree on the need for comprehensive AI governance, but they differ in their approaches. Henri Verdier emphasizes a holistic international approach, Speaker 1 focuses on reflecting the AI value chain realities, and Audrey Plonk highlights the OECD’s role in providing data and harmonization.

Henri Verdier

Speaker 1

Audrey Plonk

Need for holistic, international approach to address diverse challenges

Governance should reflect realities of AI value chain and ecosystem

OECD working to provide data and harmonization across AI approaches

Takeaways

Key Takeaways

AI governance requires a balanced, holistic international approach to address diverse challenges and opportunities

There is a need for inclusive, multi-stakeholder cooperation in developing AI governance frameworks

Infrastructure, skills, and data sovereignty gaps exist, particularly in Africa and developing regions

Open source and transparent AI development can promote innovation and economic development

Youth involvement throughout the AI governance process is crucial

Resolutions and Action Items

OECD working on finalizing a reporting framework to implement the Hiroshima AI Code of Conduct

France organizing an international AI summit in February 2025 to discuss global AI governance

Efforts to make AI governance tools and frameworks more interoperable across different initiatives

Unresolved Issues

How to effectively balance innovation with regulation and risk mitigation

Addressing data localization and sovereignty concerns, particularly for developing regions

Ensuring fair representation and inclusion of diverse perspectives in AI governance discussions

How to enforce AI governance policies effectively, especially in regions like Africa

Suggested Compromises

Using voluntary reporting frameworks and codes of conduct as a middle ground between strict regulation and no oversight

Partnering between public and private sectors to develop representative data and research capabilities

Allowing for innovation while simultaneously developing safeguards and addressing potential risks

Thought Provoking Comments

Are we sure that this impressive revolution will be a progress? Not just innovation, not just power, but a progress for humankind.

speaker

Henri Verdier

reason

This comment frames AI development not just as technological advancement, but raises the crucial question of whether it will truly benefit humanity as a whole. It sets the tone for considering the broader implications and ethics of AI.

impact

This framing shifted the discussion towards considering the holistic impacts of AI on society, culture, and human progress, rather than just focusing on technical capabilities.

I’m liking AI to water. Water sort of nourishes us, it helps us, it helps us to grow our crops. In the same way, AI also helps us to be more efficient. It helps us to digest a lot of information and helps us to leapfrog.

speaker

Thelma Quaye

reason

This analogy provides a powerful and accessible way to understand both the potential benefits and risks of AI, especially from an African perspective. It highlights AI’s transformative potential while also acknowledging the need for proper governance.

impact

This comment brought attention to the specific context and needs of developing regions, leading to a discussion on inclusivity, infrastructure challenges, and the potential for AI to address developmental gaps.

We need to increase the infrastructure. We could say that we will leverage on other people’s infrastructure, which is what we are doing now, but I believe that it takes some sort of sovereignty from us.

speaker

Thelma Quaye

reason

This insight highlights the critical issue of digital sovereignty and the importance of local infrastructure for AI development, especially in the context of developing nations.

impact

It sparked a deeper conversation about data localization, infrastructure gaps, and the geopolitical implications of AI development and deployment.

We have most of the platforms and systems and technologies being used by majority of Africans not developed in Africa, which means data localization doesn’t really exist in context. It can exist on paper, but in reality, data localization is not working.

speaker

Leydon Shantseko

reason

This comment exposes a critical gap between policy intentions and practical realities in data governance, particularly in the African context.

impact

It led to a discussion on the challenges of implementing effective data governance and AI policies in regions that lack control over their technological infrastructure.

Of course, AI is not just a very promising technology. That’s also a source of power and an intense competition between companies, they want to take the lead. That’s also a geopolitical competition between models and that’s a competition between international organizations to take the lead.

speaker

Henri Verdier

reason

This comment brings attention to the often overlooked geopolitical and competitive aspects of AI development, framing it as not just a technological issue but a matter of global power dynamics.

impact

It broadened the discussion to include considerations of international relations, economic competition, and the potential for fragmentation in global AI governance.

Overall Assessment

These key comments shaped the discussion by broadening its scope from purely technical considerations to encompass ethical, geopolitical, and developmental aspects of AI. They highlighted the need for inclusive, globally coordinated governance that addresses the specific challenges of developing regions while also considering the competitive and power dynamics at play. The discussion evolved from abstract principles to concrete challenges in implementation, particularly around data sovereignty and infrastructure development. This multifaceted approach underscored the complexity of AI governance and the necessity for diverse perspectives in shaping global policies.

Follow-up Questions

How can we connect the various AI governance conversations and frameworks to ensure they work together coherently?

speaker

Melinda Claybaugh

explanation

There are many parallel conversations happening around AI safety, data privacy, copyright, and governance of different types of AI. Connecting these is important for developing comprehensive and effective AI governance.

How can AI governance frameworks reflect the realities of the AI value chain, particularly for open source AI?

speaker

Melinda Claybaugh

explanation

Different actors in the AI ecosystem have unique roles and responsibilities. Governance frameworks need to account for these differences, especially considering open source AI models.

How can we increase AI infrastructure, particularly data centers, in Africa?

speaker

Thelma Quaye

explanation

The lack of data centers in Africa compared to its population size limits the continent’s ability to leverage AI effectively and maintain data sovereignty.

How can we use AI to scale up technical and vocational education and training (TVET) skills in Africa?

speaker

Thelma Quaye

explanation

Using AI and virtual reality to create classrooms for learning technical skills could help address the skills gap in Africa.

How can we develop more African-specific AI datasets?

speaker

Thelma Quaye

explanation

Having AI trained on African datasets is crucial for ensuring AI tools are relevant and unbiased for African contexts.

How can African countries effectively govern data that is not hosted locally?

speaker

Leydon Shantseko

explanation

Many African governments and organizations use platforms hosted outside Africa, creating challenges for local data governance and regulation.

How can we ensure a fair balance between global data use and local data governance?

speaker

Leydon Shantseko

explanation

There’s a need to balance the benefits of global AI systems with local control and governance of data.

How can we increase AI adoption across industries, particularly in smaller companies?

speaker

Audrey Plonk

explanation

Current AI adoption rates are low, especially outside of large companies. Increasing adoption could lead to significant benefits but requires addressing issues of trust, safety, and accessibility.

How can we ensure public research can reproduce or preempt private sector AI developments?

speaker

Henri Verdier

explanation

There’s a growing gap between public and private AI research capabilities, which could limit common knowledge and public oversight of AI developments.

How can we develop partnerships between public and private sectors to advance common AI goals?

speaker

Melinda Claybaugh

explanation

Collaboration is needed in areas such as research capabilities and developing representative global datasets for AI training.

How can we improve the enforcement of AI governance policies, particularly in Africa?

speaker

Thelma Quaye

explanation

There’s a need for effective enforcement mechanisms to ensure AI governance policies have real impact.

How can we better involve youth in AI governance discussions and decision-making processes?

speaker

Leydon Shantseko

explanation

Youth perspectives are often left out of governance processes, despite young people being at the forefront of many AI innovations.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #14 Children in the Metaverse

Session at a Glance

Summary

This discussion focused on children’s rights and safety in the metaverse and other virtual environments. Experts from various fields explored the challenges and opportunities presented by these emerging technologies. The conversation highlighted that children are already active users of virtual worlds, with over 50% of metaverse users being under 16 years old. Participants discussed the need for age verification and data protection measures, balancing these with data minimization principles.

The discussion touched on existing regulations such as the GDPR and their applicability to virtual environments, while noting the need for more specific governance frameworks for the metaverse. Experts emphasized the importance of involving children in the development of policies and technologies that affect them, as well as the need for child-friendly reporting mechanisms and effective remedies in virtual spaces.

The potential benefits of the metaverse for children’s education, creativity, and advocacy were highlighted, alongside concerns about privacy, safety, and potential exploitation. Participants stressed the importance of digital literacy for both children and adults. The discussion also covered the role of parents and educators in supporting children’s safe engagement with virtual environments.

The Global Digital Compact was presented as a framework for shaping the digital environment, with a focus on protecting children’s rights online. Overall, the discussion emphasized the need for a balanced approach that protects children while also empowering them to benefit from the opportunities presented by virtual worlds and emerging technologies.

Keypoints

Major discussion points:

– The metaverse and virtual worlds are already inhabited by many children, raising concerns about safety and rights

– There are challenges around age verification, data collection, and protecting children while allowing participation

– Existing regulations such as the GDPR provide some guidance, but there are gaps in regulating virtual reality environments

– Children want to be empowered and involved in shaping policies for the metaverse, not just protected

– The metaverse offers opportunities for education and child advocacy if developed responsibly

The overall purpose of the discussion was to explore how children’s rights and safety can be ensured in virtual reality and metaverse environments, while also leveraging the opportunities these technologies offer for children’s development and participation.

The tone of the discussion was primarily serious and concerned about child protection, but became more optimistic towards the end when discussing the potential benefits and children’s desire to be involved. There was a mix of caution about risks and enthusiasm about possibilities throughout.

Speakers

– Jutta Croll, Chairwoman of the Digital Opportunities Foundation

– Michael Barngrover, Managing Director of XR4Europe

– Deepak Tewari, Founder of Privately, a technology company focused on online child safety

– Sophie Pohle, Advisor at the German Children’s Fund (Deutsches Kinderhilfswerk e.V.)

– Lhajoui Maryem, Projects and Network Lead at the Digital Child Rights Foundation

– Emma Day, Nonresident fellow at the Atlantic Council’s Digital Forensic Research Lab and a human rights lawyer

– Deepali Liberhan, Global Director of Safety Policy, Meta

– Torsten Krause, Project Consultant at the Children’s Rights in the Digital World

– Hazel Bitana, Deputy Regional Executive Director at the Child Rights Coalition Asia

Full session report

The Nature and Scope of Virtual Worlds and the Metaverse

The discussion began with Michael Barngrover, Managing Director of XR4Europe, exploring the broad nature of virtual worlds. He highlighted at least four relevant concepts: virtual reality (VR), mixed reality, 3D gaming, and social media platforms. Barngrover also noted that social media platforms are already functioning as virtual worlds based on user behavior, and discussed the cognitive load of multi-presence in these environments.

Deepak Tewari, founder of Privately, provided concrete statistics, noting that the metaverse currently has 600 million monthly active users, with 51% under the age of 16 and 31% under 13. This revelation underscored the urgency of addressing children’s rights and safety in virtual environments.

Children’s Rights and Safety in Virtual Environments

Sophie Pohle, representing the German Children’s Fund, referenced General Comment 25 as a framework for children’s rights in digital environments, establishing a legal and ethical foundation for the conversation.

Lhajoui Maryem from the Digital Child Rights Foundation emphasized the need for age-appropriate content and safer digital experiences for children. Hazel Bitana from the Child Rights Coalition Asia presented children’s perspectives from the Asia-Pacific region, stressing the importance of child-friendly reporting mechanisms and effective remedies in virtual spaces. She also highlighted children’s desire to be involved in policymaking processes.

Deepali Liberhan outlined Meta’s approach, explaining that the company implements default privacy settings, parental supervision tools, and third-party age verification through Yoti for teens. However, this platform-specific approach was contrasted with calls for broader, more comprehensive safeguards across all virtual environments.

Data Collection, Privacy Concerns, and Age Verification

Emma Day, a human rights lawyer and artificial intelligence ethics specialist, highlighted the unprecedented amount of sensitive data collected in the metaverse as a significant concern. The discussion revealed tensions between effective age verification and data privacy.

Deepak Tewari presented on privacy-preserving age detection technology, arguing that such solutions already exist but face pushback. He suggested implementing age verification at the device or operating system level to minimize data collection. This contrasted with Liberhan’s emphasis on balancing age verification with data minimization principles, highlighting the complexity of implementing safety measures without compromising user privacy.

Regulation and Governance of Virtual Environments

The applicability of existing regulations like GDPR to virtual environments was discussed, with Emma Day noting that while these regulations apply, new challenges emerge in the metaverse context. She advocated for a multi-stakeholder approach to developing governance frameworks.

Torsten Krause mentioned the Global Digital Compact as a framework for protecting children’s rights in digital spaces. He elaborated on its principles and potential impact on international cooperation for digital governance. Krause also emphasized the responsibility of states in implementing child safety policies and standards.

The discussion touched on the Australian law preventing children under 16 from using social media, sparking debate about the effectiveness and implications of such stringent measures.

Opportunities and Risks of the Metaverse for Children

The potential benefits and risks of the metaverse for children were explored. While some speakers highlighted educational opportunities through immersive experiences, audience members raised concerns about addiction and disconnection from reality. Hazel Bitana provided a balanced perspective, acknowledging the potential for creative expression while also warning about the risk of deepening discrimination in virtual spaces.

The importance of avatars in virtual worlds was discussed, highlighting their role in self-expression and identity formation for children and teens.

Unresolved Issues and Future Directions

Several key issues remained unresolved, including effective age verification without excessive data collection, appropriate consequences for virtual harms, and closing digital divides in access to metaverse technologies. The ongoing challenge of balancing innovation with protection in metaverse regulation was emphasized.

Jutta Croll mentioned an upcoming session on age-aware internet of things, while Emma Day noted a panel on governance of edtech, neurotech, and fintech, indicating the breadth of related topics to be explored.

Conclusion

The discussion underscored the complex nature of children’s rights and safety in virtual environments. While there was broad agreement on the importance of protecting children, significant differences emerged in approaches to implementation. The conversation highlighted the need for continued dialogue, research, and multi-stakeholder collaboration to develop effective governance frameworks that balance safety, privacy, and children’s right to participation in the evolving digital landscape.

Session Transcript

Jutta Croll: Welcome to our workshop, Children in the Metaverse. My name is Jutta Croll, I am the chairwoman of the Digital Opportunities Foundation. Welcome to those people who are on site in Riyadh, and also to those who are taking part in our session online. As I said before, my name is Jutta Croll, I’m chairwoman of the German Digital Opportunities Foundation, and I prepared the workshop proposal together with my colleague, Torsten Krause, and with Peter Josius from the Netherlands. Thank you for being here. Yes, just to set the scene: with the availability of artificial intelligence and the creation of virtual worlds, the immersion of digital media in everyday life has reached a new level of development, although the concept of virtual reality dates back 35 years, like the World Wide Web and like the Children’s Convention. So we have a coincidence of certain developments, but nonetheless, virtual reality has only come into our lives during the last several years. And what we already know from 20 years of children using digital devices and being online on the internet: they are the early adopters, and they are those who will also be the first inhabitants of the virtual environment. So that is why we have come together to consider how children’s rights can be ensured in virtual reality, in the metaverse, and I’m really happy to have these esteemed speakers around me and also online. I will introduce the speakers to you once it’s their turn to speak, and we will begin with Michael Barngrover; he can already be seen in the Zoom. Michael is the Managing Director of XR4Europe, which is an industrial association with the mission to strengthen collaboration and integration between XR industry stakeholders across the European continent.
So he’s coming from the technical part, but I know that he has children’s rights on his mind, and I will hand over to you, Michael, your screens are already to be, your slides are already to be seen and just start your presentation. Thank you.

Michael Barngrover: Excellent. Thank you very much. Just to confirm that everyone hears me all right. Good. Yep. Okay. Then yes, I’m the Managing Director of XR4Europe, and we are an association that supports those who work with XR all over the continent of Europe, both researchers as well as companies and policymakers, helping and hoping to make the future of virtual worlds in Europe one that we want to have happen, one where future generations can be healthy and productive. And so we can go to the next slide; I’d like to focus on the scope. So scoping virtual worlds. So the next slide, is that something that I don’t have control over doing? No, I do not. Can you move to the next slide? Yeah, thank you. Okay. So virtual worlds is a challenging term, because it’s very broad and very encompassing. There are at least four broad concepts of virtual worlds that I think are very relevant. The first one being VR, virtual reality. So you’re completely immersed, taking place and sharing place, often with others, in a digital environment, a fully digital environment. But of course, it’s not fully digital, because you are there, and you have both a digital and a non-digital existence. And then we have mixed reality, which is what is becoming more prominent, starting last year with new devices, new headsets that can do this. And almost every device going forward is going to be increasingly capable of mixed reality, which brings the digital and the non-digital together in an integrated experience, which means that in the same way that you are present in this digital environment, this digital medium, you are still present in your physical environment, interactively. So you’re having to interact with both. But both of these are built off of another kind of virtual world, which is much more mature and much more common, which is basically traditional 3D gaming, and even 2D gaming.
So these worlds, like this image here of Fortnite, these are places where hundreds of millions of users, billions of users are actively engaged, including a lot of children. So this is where the virtual world starts to extend beyond what we think of as the future into what we’re actually concerned about today, right now. So the lessons that we can draw from what makes these environments safe and healthy, we should be able to apply to future virtual worlds that would be mixed reality and virtual reality. But even these 3D worlds follow a lot of the same experiences as 2D and non-visually represented or non-immersive, non-3D environments, like even social media platforms. Social media platforms are virtual worlds, in that they are populated, they are active, they are interactive, but they are virtual. They don’t have a direct correlate with the physical world, except through us, the users. Can I go to the next slide, please? Because in the next slide, we’re going to talk about, sorry, it’s back a couple of slides. It’s the cognitive load of multi-presence. So a couple of slides before this one. So there is a cognitive load to thinking about and managing your activities and your presence, and thus the activities and presence of others, in multiple worlds: the non-digital world, but also the digital virtual worlds. And when there are many virtual worlds that we are active in, then that is an additional load. So already this is, again, something we’re doing right now with social media platforms, but it’s also true of 3D platforms that are persistent. The traditional gaming platforms are always there. Our friends may be there. When we’re not there, these worlds are still active, and we are still maybe thinking about them, particularly children, who generate a lot of their social equity through their activities on these platforms. As these become 3D immersive spaces, it’s actually even more challenging.
There’s more that we have to think about. We have to think about, as we move around in this space, the gist of the actual physical space as it maps and correlates to a virtual space. For example, a mixed reality environment. The table or the chair, those things are in the virtual environment, whether they are represented there or not. They’re in my physical space, so they’re also in my virtual space, even if they’re not virtually represented there. So even from a safety perspective, you have to manage these things. But when we start putting more people in there with us, that becomes more complicated as well. So cognitive load for everyone, but especially children, is something really to be concerned about. Can we go to the next slide, please? The avatar slide. So when we’re talking about being in these worlds with each other, yes, we have this question about how we’re represented, how we’re viewed there. So avatars are a really important topic. I’ll go a little quicker just because we’re getting, I think, a little over time for my part. But the avatar here is just to say that avatars come in many different forms, and it’s not possible for any one company to provide all the ranges of representation here, whether cultural or even age and ideology. So there’s a big question about whether people should be able to make their own avatars and bring them into these virtual worlds, or whether they should limit themselves to the options that are provided to them. And these, of course, have tremendous consequences, particularly for younger people, as they start to spend more time building social equity while using avatars to represent themselves with others. And it’s not just their choice. When they choose an avatar, it does not mean that that’s what they are. They have to then still negotiate with others, with peers, to establish what their identity is in that social environment. So avatars are just a tool towards establishing identity in a virtual world.
And then the next slide, please. Just very quickly, once we’re in there and we’re represented, sorry, a couple slides back. There’s a slide about policing. But basically, once we’re in this environment, it is still something of a free-for-all environment in virtual worlds, because we do not have anything akin to policing or criminal justice. So when crimes happen in these virtual environments, right now, it’s very hard to arrive at a consequence, an acceptable consequence for them, but it’s also very difficult to know what is a suitable consequence. A virtual crime is a crime by definition, but we don’t know how far it’s equivalent to the similar or same crime that may take place outside of the virtual realm. And this is, again, not new to virtual worlds. If we consider social media platforms to be a form of virtual world, then this is something that already we’re trying to legislate and understand. So I’ll close it there, so we can move on to the next speaker, but thank you very much.

Jutta Croll: Thank you, Michael, for giving us a first insight into what virtual worlds will be, can be, or are already. We will now go to Deepak Tewari from Privately, who will tell us a little bit about technology to guide us through the metaverse. Deepak, I can see, sorry, I can see everyone. Okay, we can hear you, and now we can also see you. Would you please introduce yourself, and then we will have a look whether your slides will work, because there are some technical issues. It’s the first day of the Internet Governance Forum, and we are in a really busy, busy room, and the technicians are doing a great job, but sometimes things go wrong. We will try. It’s all right. Even if there are no slides, I will be able to conduct myself.

Deepak Tewari: I’m very happy to be here addressing all of you. I run a company based out of Lausanne in Switzerland, and the company is called Privately. Privately is a technology company, and for the last decade, we have been developing and deploying various technologies to keep minors safe online, and this includes technology that identifies threats and risks online, but also the technology of age assurance, which is essentially the technology behind being able to identify whether someone is a child online. And hopefully I can get my slides up; I would have really wanted to show you how it’s working and what it is like, but going back to the subject at hand, the metaverse. So my previous speaker was mentioning avatars and the state of the metaverse or virtual worlds in general. And that’s essentially where I’m calling in from; I’m sitting very close to that lake that you see here, and it’s very beautiful. So, very happy to be here. And if you could kindly go to the next slide. So, this summarizes what Privately does: we have various kinds of safeguarding and privacy-enhancing technology, but something which is very pertinent to today’s discussion is the technology behind being able to identify whether a participant online, and especially in the metaverse, is a child. And can we do this? That’s something I’m going to talk about, but as I said, let’s back up about the metaverse; a couple of notes that I took. So, as of the latest statistics today, what roughly categorizes as the metaverse has about 600 million monthly active users. And according to some reports that I have seen, particularly from Wired and another agency called Data Reg, 51% of the active users of the so-called metaverse are under the age of 13, and 84% are under the age of 18.
And you might ask me why that is the case: because most of what we know as the metaverse or virtual world is made of gaming environments, Roblox, Minecraft, Fortnite, which explains why such a large number of, you know, the participants in there are actually minors. One more interesting piece of data: 31% of US adults have never heard the term metaverse. So that just tells you how big the divide is, and essentially this virtual world that we are talking about is actually full of children. And this has been our experience as well, developing technology, which takes me to the next slide, please. If, yes, I don’t know if you can get this, yes, if you can get this working, then I’ll let you see it, and then I’ll comment after. There is some audio behind it, if you can see it. Well, if the volume is not playing, I’ll tell you; first of all, can we get the audio playing on that one? It’s playing, but it’s very faint. Okay, so let me tell you what’s happening. So this is actually our technology at work inside a VR headset, and first an adult is speaking. And when the VR headset detects that it’s the adult which is speaking, it gives them access to adult content. As you saw, the advertisements were meant for adults. But after that, if the audio was playing, you would now hear a child speaking. And as soon as the technology detects that it’s a child which is speaking, the environment changes, and the only content, in this case advertising, that is shown to the participant is related to children. So what I wanted to illustrate out of this: you will see this time it is the child. So it goes inside, you’ll see all child-related content. The point we are trying to make here, well, there is a QR code here, so you could actually watch this video on YouTube directly as well.
The point we are trying to make here is that technology exists today from companies like Privately, which have developed fully privacy-preserving, GDPR-compliant solutions for detecting the age of users. And as you can imagine, the detection of the age of users is the cornerstone of keeping users safe online. And this is the demo that you just saw. We actually had it implemented for a very, very large social media network. It was in trial. Of course, they decided not to pursue it, for reasons best known to them. But as you can imagine, the technology today exists, and we are seeing more and more of these technologies which are privacy-preserving, run in the background, but are able to differentiate in real time, just as we do as adults, as human beings: this is a child, this is an adult. And they can do so in the background. And in doing so, they can ensure that the platform or the service or the virtual environment is able to deliver an age-appropriate experience. So this is essentially my message. If you need to know more about the tech, those are the QR codes that I have left behind. You could very happily either contact me or look at the codes here at the showroom. You can even test the technologies. So thank you for your time. And that’s me. That’s over now.
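The privacy-preserving gating Tewari demos, where only a coarse on-device age estimate (never the raw voice sample) decides which content is served, could be sketched roughly as follows. All names, bands and thresholds here are illustrative assumptions, not Privately’s actual API:

```python
from dataclasses import dataclass

# Illustrative age bands; a real deployment would follow the platform's policy tiers.
BANDS = ("under13", "13-17", "adult")

@dataclass
class AgeSignal:
    band: str         # only this coarse band ever leaves the device
    confidence: float  # classifier confidence in [0, 1]

def select_content(signal: AgeSignal, catalogue: dict) -> list:
    """Return only catalogue items rated for the detected band.

    The privacy-preserving property: this server-side function never sees
    the voice audio, only the (band, confidence) pair estimated on-device.
    """
    if signal.confidence < 0.8:
        # When the estimate is uncertain, fail safe: serve the most
        # restrictive (child-appropriate) experience.
        return catalogue["under13"]
    return catalogue[signal.band]

# Hypothetical ad catalogue, mirroring the adult-vs-child ads in the demo.
catalogue = {
    "under13": ["cartoon-ad"],
    "13-17": ["cartoon-ad", "sneaker-ad"],
    "adult": ["cartoon-ad", "sneaker-ad", "car-ad"],
}

print(select_content(AgeSignal("adult", 0.95), catalogue))  # full catalogue
print(select_content(AgeSignal("adult", 0.55), catalogue))  # low confidence, child-safe only
```

The design choice worth noting is the fail-safe branch: an age-assurance pipeline that is unsure should degrade toward the child-appropriate experience, not the permissive one.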

Jutta Croll: Thank you, Deepak. Thank you so much for giving us an insight into also how children or maybe all users might be protected in virtual environments. We have now about seven to eight minutes for questions, either from the floor here inside or from our online participants. Do we have any people who want to come in with their questions? Michael and Deepak will be there to answer your technology-related questions. or whether you have any further questions so far. Yes, we have a hand raised. You will need to take a microphone.

Audience: Hi, my name is Martina from Slovakia. I’d like to ask you for this solution for the privacy of the children. Is it not true that the child’s voice will change to adult’s voice?

Jutta Croll: Yes, please Deepak, go ahead.

Deepak Tewari: So, look, this is real time. Maybe to clarify, this is not happening on a one-time basis. This will become a feature of the microphone. Each time you’re speaking, the microphone can detect whether it’s a child or an adult. So it’s not like there is a one-time check done and you are categorized as an adult or a child. This is real time, this is continuous, and this is a feature of the device itself. In a very similar way, we’ve also created age-aware cameras. If you go into shops, the camera looks at you, and it detects if it’s an underage person. They will not serve you alcohol or any restricted item. So you have to think of it as being continuous and not done once, forever or for a long time.
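Tewari’s “continuous, not once” point can be made concrete with a small sketch: instead of a single sign-up check, every utterance yields a fresh child/adult estimate, and the session mode follows a rolling window of recent estimates. This is hypothetical illustration code, not the actual device firmware:

```python
from collections import deque

class ContinuousAgeGate:
    """Keeps a rolling window of per-utterance child/adult estimates.

    The gate re-evaluates on every utterance, so a session is never
    permanently classified the way a one-time sign-up check would be.
    """

    def __init__(self, window: int = 5):
        self.recent = deque(maxlen=window)

    def observe(self, is_child: bool) -> str:
        """Record one utterance's estimate and return the current mode."""
        self.recent.append(is_child)
        child_votes = sum(self.recent)
        # Majority vote over the window; ties resolve to the safer child mode.
        return "child_mode" if child_votes * 2 >= len(self.recent) else "adult_mode"

gate = ContinuousAgeGate(window=3)
print(gate.observe(False))  # adult utterance
print(gate.observe(True))   # child utterance arrives, mode flips toward child
print(gate.observe(True))
```

The window smooths over single misclassifications, while the tie-breaking rule again prefers the restrictive mode when the evidence is split.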

Jutta Croll: So I see she’s nodding, so the answer was accepted. It gives me the opportunity, as you’ve been speaking about age awareness, that we will have another session on Wednesday at 10.30. Yes, at 10.30 on age aware internet of things. So probably we’ll put that in your schedule. We have another question from the floor. The microphone.

Audience: What if the adult is using an AI to convert his voice from an adult to a child, to deceive the program and get into the child’s room?

Deepak Tewari: Yes, that is correct. You’re right, there is a threat, because these days there are artificial programs and generative AI which can be used to attack such systems. You have to go into a little bit of depth on this. There are two kinds of attacks. One is called a presentation attack, where you use an external program and play a child’s voice, for example. That’s one way. The other one is where you actually inject a child’s voice into the program, which is a little bit more difficult. It’s called an injection attack. There are obviously technologies to detect both, and you could always argue that it’s a battle between how good the detection technology is vis-a-vis the technology which is trying to fool or spoof it. But an interesting thing here is that because this is continuous, in some use cases we’ve seen the technology being used to detect anomalies. So the same person is talking like a child to one person, and like an adult to someone else. That produces an anomaly in the system. And that could be used to flag that there is a person in that group who is probably malicious. So there are ways and means to detect these things. But like you rightly said, there’s a contest between technologies. Solutions exist, by the way, to detect spoofed voices and, you know, injection attacks.
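The anomaly Tewari describes, one account producing “child” voice classifications in one conversation and “adult” in another, is straightforward to flag once per-conversation labels exist. A minimal sketch with made-up data, not a real detection pipeline:

```python
def flag_voice_anomalies(observations: list) -> set:
    """Flag accounts whose detected voice class differs across conversations.

    observations: iterable of (account_id, conversation_id, detected_class)
    tuples. An account seen as both "child" and "adult" in different
    conversations may indicate voice spoofing or conversion.
    """
    seen = {}
    for account, _conversation, detected in observations:
        seen.setdefault(account, set()).add(detected)
    return {account for account, classes in seen.items() if len(classes) > 1}

obs = [
    ("acct1", "roomA", "child"),  # talks like a child in one room...
    ("acct1", "roomB", "adult"),  # ...and like an adult in another: suspicious
    ("acct2", "roomA", "child"),  # consistent, not flagged
]
print(flag_voice_anomalies(obs))
```

In practice such a flag would only trigger a review, not an automatic ban, since legitimate cases (a shared headset in a family, for instance) produce the same pattern.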

Jutta Croll: Thank you, Deepak. We have time for one more question, and then we go to our colleague, Sophie. Thank you, Deepak.

Audience: This is a very important work, especially in a world where governments are now contemplating where children under 16 even should have access to platforms or not. So you talked about working with some big platforms, but of course, is there ownership of such kind of work? At a more strategic level, you know, is there any ownership of this kind of work? Because this means less consumers of their content. And also, because of the way you register for these platforms, these big tech platforms, you can just say under 16, they don’t give you a voice or anything that you can detect. So is there any more work by privately that you can talk about how to stop the exploitation of children by big tech platforms?

Deepak Tewari: Look, thank you for your question. It is true, I have to say, that we are seeing a big pushback from big platforms, big tech. And I hate to say this, but it is because age assurance comes as a threat to their business model. Imagine you’re advertising to everyone and saying, I’m only showing these ads to adults, but a part of your users are kids. If you just follow the money, then this is a misappropriation of advertising revenues, minus all the morality and minus all the child safety here. So it is not in the interest of big tech to support age verification, which is why you see stalling and fear, uncertainty and doubt being sown by pretty much all the big tech platforms. We are actually even fighting a case in the Supreme Court in the U.S. right now. So we are active everywhere. The technology exists, but big tech does not want to do it directly. And it’s always about liability: who has the liability for this? As long as that is not settled, and as long as there are no strict fines, unfortunately, big tech will be pushing this out a little bit. So we as a small company are trying to do our bit: challenging in court, doing thought leadership, also going into games, building it and showing people that it’s privacy-preserving, it works, it’s functional. We have certified all of the technology publicly. But I have to say that there is a business model contradiction with big tech. So, yeah, there is a problem.

Jutta Croll: We already have big tech also in the room. So we’ll go to that question afterwards. But first, I would like to refer to the lady from the floor who referenced children’s rights and their right of access to digital media, to digital communication. And that’s the point where Sophie Pohle from the German Children’s Fund comes into play, and she will give us a short introduction to General Comment No. 25. For those of you who have not heard about it, we have the document printed out here, and when you leave the room you can take it; you will also find it on the website www.childrens-rights.digital. Yes, thank you. Sophie, over to you please.

Sophie Pohle: Thank you, Jutta. I hope everyone can hear me. So, hello and welcome from cold and rainy Germany, from Berlin. I’m Sophie Pohle. I’m from the German Children’s Fund, which is a children’s rights organization here in Germany, and we have been collaborating with the Digital Opportunities Foundation for years, including the joint coordination of Germany’s consultation process for the GC25, General Comment No. 25. And now I’ll set aside my role as today’s online moderator of our session to give you a brief overview of General Comment No. 25 as a framework for our discussion. So, let’s start. The UN Committee on the Rights of the Child published General Comment No. 25 in 2021, so three and a half years ago, and it focuses on children’s rights in the digital environment. This document guides state parties on implementing the Convention on the Rights of the Child in digital contexts. It was developed with input from governments, from experts, and also from children, and offers a practical framework to ensure comprehensive and effective measures are in place. Our session today explores the metaverse, a topic which is not directly named in General Comment No. 25. However, the GC highlights that digital innovations significantly influence children’s lives and rights, even when they do not directly engage with the Internet, and by ensuring meaningful, safe and equitable access to digital technologies, GC25 aims to empower children to fully realize their civil, political, cultural, economic and social rights in an evolving digital landscape. Let me briefly introduce the four general principles of the GC25, starting with non-discrimination, which emphasizes ensuring equal access and protection for all children. Second, we have the best interests of the child, prioritizing children’s well-being in the design, regulation and governance of digital environments, including of course virtual worlds or the metaverse.
Thirdly, we have the right to life and development, which means ensuring digital spaces support children’s holistic growth and development. And last but not least, we have the principle of respect for the views of the child, which means considering children’s perspectives in digital policymaking and platform design. What are the central statements or key emphases of the GC25? The General Comment recognizes children’s evolving capacities and how user behavior varies with age. It highlights opportunities and different levels of protection needs based on age, and also stresses the responsibility of platforms to offer age-appropriate services for children. The GC25 calls on the states parties to support children’s rights to access information and education using digital technologies. It also urges them to ensure that digital technologies enable children’s participation at local, national and international levels, and highlights the importance of child-centric design, which means integrating children’s rights into the development of digital environments, for example with age-appropriate features, content moderation, easily accessible reporting or blocking functions, and so on. On the one hand we have those opportunities and participation calls, and on the other hand we also have the GC25 calling on the states parties to identify risks and threats for children, incorporating their perspectives. It also calls for solid regulatory frameworks, to establish clear standards and international norms on the one hand, and to implement legal and administrative measures to protect children from digital violence, exploitation and abuse on the other. GC25 also encourages cooperation among stakeholders like governments, industry and civil society to tackle dynamic challenges, and last but not least it underlines the need to promote digital literacy and safety awareness for children, parents and educators to ensure informed participation.
In my last minute I’d like to give a very brief insight into children’s perspectives that were collected during the consultation process on GC25. The principles I laid out are directly informed by the needs and expectations children have voiced globally during the consultation process. I think more than 700 children globally were consulted. And yeah, to conclude, I brought some key insights, on a very general level, from young people on how we can better support them in the digital world, before Mariam after me takes us through children’s perspectives in way more detail. So what do children want? They want equitable, reliable access to digital technology and connectivity. They also wish for age-appropriate content and safer digital experiences, where platforms protect them against harassment, against discrimination, against aggression, and rather enable them to participate and express themselves freely. Children themselves demand greater privacy and transparency about data collection practices. And they also want more digital literacy education, also for their parents, by the way. And they want recognition of their right to play and leisure, which is also crucial when we talk about the metaverse. So much for now. I think my time is up. That was on a very general level. Yeah, thanks a lot for your attention. And I’m happy to answer questions.

Jutta Croll: Thank you, Sophie. Thank you so much. Welcome. Can you hear me? Okay, it’s working. Thank you so much for already touching upon digital literacy because we will also come to that point to discuss how the metaverse will open up opportunities, huge opportunities for training of digital literacy for children as well as for their parents and other adults. But first, we go to Mariam. Mariam, also, she is a very young person. She holds a bachelor’s degree and a master’s degree. So I’m really impressed about you. But you’re representing youth in our panel. And please go ahead. Your slide is already on the screen.

Lhajoui Maryem: Thank you. Can everyone hear me? Yes. So before I start, thank you, Sophie. And I’m very happy to see so many people in the room, actually. Thank you for being here. So my name is Mariam, and I’m here on behalf of the Digital Child Rights Foundation. We are a youth platform and expertise center based in the Netherlands, but we focus internationally. So what we do at Digital Child Rights, our main goals are to really highlight the importance of digital child rights and to include children’s opinions on, you know, what is safe for them. We do this in different ways. We create playful tools for children. We also gather most of their information on how they view things through our child rights panel. And then for youth, we have many youth ambassadors in the Netherlands, but also in different other countries. And so we really want to give youth a platform to connect through connection challenges. So Sophie, thank you again, a very clear overview of General Comment 25. You might see some overlap here. So at Digital Child Rights, these are our 10 themes, and they are based off General Comment 25. So while the metaverse that we’re talking about offers great opportunities for children, for youth to actively participate, there’s many chances in ways we didn’t know before, right? At the same time, we also have to remain critical and acknowledge that there are challenges, right? We have to prioritize safety and privacy and fairness in the best interest of the children, and that’s what we really focus on. So when it comes to privacy, it was also already mentioned before: who’s collecting my data, where is my data going, how old am I? Safety, also addressed very clearly by my colleague Deepak. So when it comes to age verification, can I pretend to be way older than I am, or can I pretend to be way younger than I actually am?
And so we know the dangers that come along with it, but what actual action should be taken there, and how do children even view this? Do they even know that it’s possible to meet other people online who, unfortunately, might not have their best interest at heart? So this is where the rules are very important, right? So there are rules, also as outlined in the General Comment, like Sophie explained; so how do children view them, and what are we going to do about this? And then also really important at Digital Child Rights, and I also heard it in many others, is this digital inclusion, right? So can everyone participate? We’re based in the Netherlands, we speak with a lot of youth in the Netherlands, but it’s different when you look at other countries, so can everyone participate? And at Digital Child Rights, we actually conducted some interviews at Gaza Camp, Jordan, this year, where we spoke to Palestinian refugees in Jordan about access to the internet, access to the metaverse: what are opportunities for them in the metaverse, what’s important to them? So this digital inclusion is very important, especially when it’s so closely interlinked with education, when education plays a very important role. And then also very important, what we frame as opinion: can everyone give their opinion online? Are children free to say what they want, and also able to do that within the frameworks of do not bully, non-discrimination, as outlined in the General Comment? So it is very important that it’s equal and that they can give their opinion, and we gather a lot of their opinions.
And that’s also why we are here, and I would love to meet many of you. So the question for us is: what can we do in the best interest of children, and how do we really include them in it? So I hope to talk to many of you after this.

Jutta Croll: Yes, but we also have now about 10 minutes for you to take questions as well as for Sophie. So do we have any questions here in the room or otherwise in the online chat? Anyone who wants to raise their hand? Yes, okay, Emma, please. It should work.

Audience: Thank you for the presentation. So I think the question was for me, right?

Lhajoui Maryem: Yes, thank you, Emma, for your question. So that’s a really good question. We should always ask ourselves how much knowledge is already present in the room when speaking about these important issues. So when it comes to children, we create playful tools to connect with them. So there is, what do you call it, Kofta? A card game, and my colleague is holding it. Perfect. So there is a card game. It’s a playful way to really talk about what’s connected to questions like: can everybody participate, and what does that even mean? And also, with safety, what does safety even mean? We also wrote it here, you can take a look at it, so that you’re aware that you can ask for help when you’re in danger and that there are many opportunities and chances for you. So there is a card game, and we also offer different kinds of workshops. There’s a workshop with masks, where children make their own mask. And this really portrays, right, this age verification, and also, as outlined in the previous presentation by Michael, I think, about the avatar: who am I even in the online world? Am I a different person than I am in the real world? So we really encourage them to think about this before they then give their opinions. And then still, there’s so much to learn. Through these playful tools and through the connection challenges they do with other platforms, we encourage them to learn more. Then we can learn more from them, and they can also give better opinions. So I hope that answered your question, Emma.

Jutta Croll: We have another question here. And then I would also turn to Sophie to tell us a little bit, if you’re able to do so, about children’s participation in General Comment No. 25, which pretty much refers to what you have said. So we take those two questions there. Yes.

Audience: I’m the youth ambassador from Hong Kong, from the One Path Foundation. And I want to ask a question: there are lots of attractions in the metaverse. How can you prevent children from getting addicted to those attractions and cut off from reality?

Jutta Croll: Okay, it’s a question with regard to addiction to the metaverse. You’re talking about addiction to the metaverse, right? Yes. Okay, I’m not sure whether this question should go to the panel or whether we can go to it afterwards. Who would you want to pose the question to, Mariam? Okay, go ahead.

Audience: Because just now she said that there are lots of playful tools.

Lhajoui Maryem: That’s a good question, actually. So there are many things in the online world. And I must admit that I also get maybe a bit addicted sometimes, and maybe I can get a bit lost, you know, when you’re scrolling. So what can we do? This is actually a question for you and for me at the same time. What can we do to help each other and help our friends who are the same age to really not get lost in that? So, like I said in the beginning, the metaverse, yeah, we can get a bit lost in it. But it’s also that, you know, we’re going with the times. It is also a place where there are many great opportunities, as long as we know how to handle it, right? So we really try to make young people aware of the dangers, yeah, the challenges, and also the nice things. We really try to tell you: if anything happens, there’s always some kind of help offered, and you should really be aware that you have the right to be safe, right? Because Winston, if I tell you, yeah, you should not spend any time on your phone, that’s a bit crazy, right? Maybe. I don’t know. If you want. But we need to regulate it; we can’t spend too much time. But it’s also interlinked: you can learn a lot, and it can help you with your education. So, to answer your question, I’m sorry.

Audience: Hi. I’m from India. I just wanted to ask Mariam and Sophie both. She talked about digital literacy. You’re talking about the programs with children. How do you involve parents and educators when it comes to children in metaverse? Because that’s crucial to have them on board when we talk about children’s engagement in the online spaces.

Jutta Croll: I’ll give that question to Sophie. But only a short answer, Sophie, please. Because we are running out of time.

Sophie Pohle: Yeah, that’s a question we have also discussed a lot in Germany for a few years, and I think the key is to involve the schools more, to reach every child, because every child goes to school and there we have the parents too. So that’s key, I think. And we also need to think about how we can reach those parents who are not already sensitive to these topics, because often we see that parents inform themselves when they see there’s a problem, but we also have a lot of parents who do not have access to this information and who do not have the resources, and we need to think more about how to involve them and reach them directly. It’s a very complex question, to be honest, very difficult for a short answer.

Jutta Croll: Okay, we will follow up with that as well.

Audience: Hello, it’s not a question, it’s a proposal to the Digital Child Rights Foundation. I am Sadat Rahman from Bangladesh. We are also working for teenagers in Bangladesh, and in Bangladesh we have a helpline, 13 to 19, a cyber teens helpline: if any Bangladeshi teenager is facing cyber bullying or cyber harassment, they can call us. We are also working with the Netherlands through Child Helpline International, and I received the International Children’s Peace Prize 2020 from KidsRights. So I would like to work with the Digital Child Rights Foundation. We need mentors. Thank you.

Jutta Croll: Thank you so much. So we leave it at that because we already have our next speaker on my left side, and Hazel is also in the Zoom room, but we will start with Emma Day. She is a human rights lawyer and an artificial intelligence ethics specialist, and she is the founder of a consulting company specializing in human rights and technology for UN agencies. Emma, over to you, because when we set up this session proposal, the Australian law keeping children under the age of 16 off social media had not been enacted; it was not even under debate. So now we have a different situation, but I’m pretty sure you will be able to address it.

Emma Day: We specialize in human rights and technology, and I’m going to talk to you a bit about the existing legal and regulatory landscape in the metaverse and maybe where some of the gaps might be. So the metaverse is an evolving space, and some of what we’re talking about is a little bit futuristic and may not actually exist yet, but we’re talking about a kind of virtual space where unprecedented amounts of data are collected from child users and adult users as well. Already today we have apps and websites which collect lots of data about users, about where they’re going online and how they’re navigating their apps, but in the metaverse companies can collect much larger volumes of data and much more sensitive data: things like users’ physiological responses, their movements, and potentially even their brainwave patterns, which may give companies a much deeper insight into users’ thought patterns and behaviors. These can then be used to target people with marketing, to track people for commercial surveillance, or even be shared with governments for government surveillance. So it’s something that takes us to another level when we’re thinking about data governance. Then we have people’s behavior in the metaverse, both children and adults. If someone says something in the metaverse space, is it the same as posting content online? What laws should apply there? Who is liable for content they post online? If I use an avatar online, am I responsible for its speech? And if my avatar abuses somebody who’s wearing some kind of haptic device, so that they can feel the touch, then how do we deal with that? So there are some questions that we don’t really have answers to from regulators, but there are some regulations that we know apply. So for example, the GDPR would still apply in the metaverse. 
So this is the European regulation around data protection, and there are many laws around the world now modeled on the GDPR, which governs how businesses handle personal data. And that means also that the children’s codes which have been developed, like the UK Age Appropriate Design Code and similar codes in other countries, which are guidance on how the GDPR applies to children, would also apply. But it may be difficult within data protection law to determine who is the data controller and who is the data processor. A data controller is the entity responsible for deciding how the data is going to be used and for what purposes, and they are then ultimately the most liable and accountable for that. But if you have lots of different actors in the metaverse space who are sharing data between them, it may become quite confusing. And then the data controller, before they process data, particularly from children, should be giving them a privacy notice. So how many privacy notices can you have in different parts of the metaverse? And say you’re in an experience: maybe a child is walking along a street in a virtual town and they stop in front of a bakery and look in the bakery window. Then maybe one of the companies involved can see that they may be hungry and can target them with food advertising because they stopped in front of that bakery. So it’s a different scenario from the way we use websites and apps currently. And then, of course, there’s this question of how to determine which users are children. And sometimes not even which users are children and which are adults, but precisely how old a child is for data processing purposes. Then you also have a body of regulations about online safety. So we have online safety acts. We have this law in Australia which prevents children under 16 from using social media, which maybe would apply also to the metaverse. 
But then in the US there is Section 230 of the Communications Decency Act, which you may have heard of, which provides online platforms with immunity from liability for third-party content. So that may change in the US as the metaverse develops; this is a very political topic in the US, and we don’t really know what direction it will go in. So we have a very diverse global regulatory framework and, in fact, not a lot of very specific regulations and not a lot of enforcement currently. But I think the main takeaway for me would be that the common thread globally is human rights and children’s rights. And what we do have is the UN Guiding Principles on Business and Human Rights, which were endorsed by the Human Rights Council in 2011. These are the global authoritative standard for preventing and addressing human rights harms connected to business activity, and they’re aimed at both states and business enterprises. So they call on states to put in place measures to protect children’s rights, including in the digital environment, and also on businesses to respect children’s rights. And this includes tech companies. And when we think about tech companies, we need to think also about safety tech and age assurance tech; these are also tech companies. So both the platform that is providing the metaverse and also any technologies used within that platform need to carry out risk assessments, where they look at all of the different rights that could be impacted for both children and adults. And in a mixed audience, you need to look at both children and adults and make sure that all of the stakeholders are engaged with and consulted. 
And so that something that is introduced to protect children doesn’t then have an adverse impact on other rights; we need to make sure that all rights online are protected. The UN Guiding Principles provide a methodology for stakeholder engagement, for risk assessment, and then for identifying how to mitigate those risks in accordance with children’s rights and human rights. So I think that if tech companies carry out their human rights due diligence and do their child rights and human rights impact assessments, then they should in fact be in good shape to comply with regulations that may be coming down the line. I will leave it there.

Jutta Croll: Thank you so much, Emma, for giving us that insight into the situation of regulation so far, and we know that it does not address the metaverse at this time, but it will definitely be going that way. And now I’m handing over to Deepali, who was already somehow addressed because she’s coming from Meta. I don’t think you will be able to react to everything that has already been said about service providers like Meta, but I would like to refer to something Michael said at the beginning: that social media platforms are already virtual worlds, and they come to us via our own behavior. So that is something your company is working on; could you explain your position a little bit?

Deepali Liberhan: I can. I think it’d be useful to talk a little bit about what the objective is here. The objective we have is to make sure we have safe and age-appropriate experiences for young people on our platform, and irrespective of regulation, we’ve been working to do that across our apps, whether that’s Facebook, Instagram, or the VR and AR products that we offer. We’ve actually adopted a best interests of the child framework that our teams use to develop products and features for young people, and while I won’t go into all of those considerations, there are two important considerations I do want to talk about. The first, and it was lovely to hear from you, Miriam, is exactly engagement with young people and families who are using our products, and it’s really important to engage not just teens but also parents. We’ve done that: in the last couple of years, we’ve rolled out parental supervision tools across our products, including Meta Quest, which is available in the metaverse. Why parental supervision tools are really important is exactly the point that somebody made: parents don’t necessarily know how to talk to their young kids about the metaverse or virtual reality. We had consultations and engagement with parents and teens sitting in the same room to help us design our parental supervision tools, and we’ve designed them in a way that respects privacy and promotes autonomy for young people, but also gives parents some amount of oversight and insight into their teen’s activities, especially in the VR space. So, for example, you can see how much time your teen is spending in VR. You can set daily limits. You can schedule breaks. You can also approve and disallow apps. 
So, these are some of the things that are built into our parental supervision tools. And along with these tools, there was a mention of digital literacy as well: we’ve worked with experts to make resources available in our digital hub so that parents can get guidance on how to talk to their kids about virtual reality and AR and about how to stay safe online. This is one consideration we have when we’re building for young people. The second is building safe and age-appropriate experiences. So, irrespective of whether you choose to have parental supervision or not, and I think it’s really important to have parental supervision, the other thing that’s really important, what we’re doing to make sure that young people who are using the metaverse products we offer are safe, is essentially a set of built-in protections. For example, 13- to 17-year-olds have private profiles by default, and if you’ve used Instagram or Meta Quest, you know a private profile is very different from a public profile: you choose who is allowed to follow you, and not everybody can see what you’re doing. The second is that, by default, we have a personal boundary, and I don’t know if any of you who’ve used Meta Quest have used that. The personal boundary is an invisible bubble around you that is on by default, to prevent unwanted interactions when you’re in avatar space engaging with other avatars. 
The third thing we’ve done is we’ve also limited interactions with adults that teens are not connected with on the platform. So you might want to use the metaverse to connect with your aunt who’s in a different country, but you don’t necessarily want your teen to be able to engage with strangers, and we have built-in protections in place to limit those particular interactions. The other thing that’s really important is having really clear policies: all the apps listed on the Meta Horizon store, for example, have an age and content rating, like a film rating, and teens are not even able to download apps which are inappropriate for their particular age. And there’s a lot more that we are doing to make sure we are building those safe and age-appropriate experiences on our platform. The other thing I do want to point out, and I know somebody said earlier that a majority of teens online are right now using it for gaming and entertainment, is that at Meta we also feel the potential of the metaverse for immersive learning is really immense. I remember when I was in school, we used to read about interesting places like the Pyramids of Giza or the Eiffel Tower. In virtual reality, you can have young people actually visiting those worlds, and you can have young people from different countries and different economic backgrounds actually study together. Any kind of regulatory or legislative regime should also protect and promote that kind of innovation. Meta has actually set aside a $150 million fund just for immersive learning, and there are many projects that we have that I could talk about. I don’t know how much

Jutta Croll: time is up, but if you want to take some more questions, I’m looking around in the room. There we have a question, and we have another question over there. The question is for you. Can you hear me now? Yes, we can hear you.

Audience: So, we heard about Privately, I believe the company was called, and we know there’s other technology out there to verify children’s age on platform. What is Meta doing about age verification on platform?

Deepali Liberhan: So, we have a number of ways we use to assure age on our platforms, and I think one of the important things to understand is that it’s a fairly complicated area, because we want to make sure that we’re balancing different principles, and there are a couple of principles I’ll talk about. The first is data minimization. We all talked, many years ago, about how it would be really easy to collect digital IDs at the point of verification, right? But you don’t want to do that, for multiple reasons. I don’t think anybody, least of all regulators or legislators, wants companies like us to collect that data. So, what is an effective way to assure age which balances data minimization with effectiveness and proportionality? We have a number of ways that we do age assurance. For example, we have people trained on our platforms, for example Facebook and Instagram, to identify underage users, and they’re able to remove those underage users. We have also invested in proactive technology, and that proactive technology looks at certain kinds of signals, for example the kind of accounts you’re following or the kind of content you’re posting. Those are some of the methods we’ve developed, and proactive technology is something that keeps on developing, that we’re using to identify and remove underage accounts. We’re also working with, I don’t know if you’ve heard about YOTI, but YOTI is a third-party organization, like Privately, that has essentially come up with a way, in a privacy-protective manner, to identify an age range just based on your selfie. 
So, what we’ve done, and you’ll see it if you use Instagram, is that if we find someone has lied about their age, say a young person who’s 12 years of age, that person is not allowed on the platform. But, for example, if a 15-year-old wants to change their age to 18, one of the options to verify that person’s age is to provide your ID, but you can also use YOTI: you take a video selfie, and that selfie is kept for a short amount of time and then deleted, to identify your age. So, there are a variety of ways that we’re working to assure age on the platform.

Jutta Croll: We will be around to explain later, maybe, on YOTI. We have another question there. Please be brief.

Audience: Hi, just briefly, I’ll also pick up on age verification. I mean, you talked about data minimization. There are plenty of already available options that do age estimation and age verification without gathering any data. None of the social media platforms employ those properly, and I think Meta reported several billion dollars of revenue from under-13s in its latest results. So it’s clear that, whether it’s Meta or any of your competitors, none are really doing that seriously. So some of the limits you’ve talked about, restricting access by age to different content, are lovely, but if you don’t have effective age verification, then they’re also meaningless, sadly. So it’s a comment rather than a question. I think all the social media platforms could do far better. At the moment, they do the bare minimum, in my opinion, and there’s a lot more that you could do. Thank you.

Deepali Liberhan: I’ll quickly respond to your comment; I’m available after this if you want to have a broader discussion. We are exploring options to do age assurance in a more effective way. YOTI is one such organization that we work with; it’s been recognized by the German regulators, and we’re looking at ways to use it at more points of friction on our platforms. The other thing I would say is that it’s also a broader ecosystem approach. One of the legislative solutions we’ve talked about, which we think strikes a fine balance between data minimization and making it really simple for parents, is to have age verification or age assurance at the app store level or the OS level, which allows parents to oversee approval of their teens’ apps in one place and also minimizes the collection of data to one place. A lot of third parties have also talked about how this is one way we can approach this from a broader ecosystem perspective. While those discussions are happening, we are at the same time working to build more effective ways to do age assurance on our platforms. We’ve also recently launched teen accounts in certain countries, and we’re going to roll them out in the rest of the world. And we’re investing in proactive technology to give us the right signals to make sure that teens are not lying about their age, because, as you know, they will lie about their age.

Jutta Croll: They are doing that already, yes. Thank you so much. So, we are a bit under time pressure, which is why I’m moving now to the last block of our session: what can internet governance do to support a common approach to keeping children safe in digital and virtual environments? I’m handing over to my colleague Torsten, who will give us a short reflection on the Global Digital Compact and the responses the compact gives to children’s rights and the metaverse. Over to you, Torsten.

Torsten Krause: Thank you very much, Jutta. With all the thoughts and information shared in mind, we want to take a closer look at the Global Digital Compact, which was adopted this September at the United Nations Summit of the Future. You may recognize it. It is part of the United Nations’ Our Common Agenda and sets out basic principles for shaping the digital environment. It’s not a legally binding document like the Convention on the Rights of the Child, but the states express their commitment to aligning the further development of the digital environment with it. What does that mean? The GDC describes several objectives, and to name just some of them: to close all digital divides and accelerate progress across the Sustainable Development Goals; to expand inclusion and reap the benefits for all; and to foster an inclusive, open, safe, and secure digital space that respects, protects, and promotes human rights, and children’s rights are a part of human rights, as you all know. And they declare the aim to create safe, secure, and trustworthy emerging technologies, including AI, with a transparent and human-centric approach and effective human oversight. So everything that is done should in the end be controlled by humans. When we take a closer look at what is mentioned about children’s rights in the GDC, I’m not aware of how closely you have been following the process, but in the first draft children’s rights were not directly expressed, and several organizations took a stance and gave their perspectives and comments to get children’s rights into this compact. In the end, there are several points we can touch on, and the biggest area of child rights is protection rights. The states are asked to strengthen their legal systems and policy frameworks to protect the rights of the child in the digital space. 
So every one of you can ask your government how they do this, how they put their policy frameworks in place. The states should also develop and implement national online child safety policies and standards, and they call on digital technology companies and developers to respect international human rights and principles. We heard some of that in this session. All companies, developers, and social media platforms should respect human rights online and implement measures to mitigate and prevent abuses, including effective remedy procedures, as Sophie also mentioned, in line with the general comment. And a broader part concerns countering and addressing all forms of violence, including sexual and gender-based violence. Hate speech is also mentioned, as are discrimination, misinformation and disinformation, but also cyberbullying and child sexual exploitation and abuse. Therefore, it’s necessary to monitor and review digital platform policies and practices on countering child sexual exploitation and abuse, and to implement accessible reporting mechanisms for users. When we take a closer look at the provisions with regard to child rights, the GDC says that the states are responsible for connecting all schools to the internet, referring to the Giga initiative of the ITU and the United Nations Children’s Fund. And with regard to digital literacy, it is said that children, and all users of course, should be able to use the internet meaningfully and securely and safely navigate the digital space. Therefore, digital literacy, digital skills, and lifelong access to digital learning opportunities are very important. So the states are responsible for establishing and supporting national digital skills strategies and adapting teacher training programs as well as adult training programs. 
That relates to your question, Vincent: they have in mind that it’s not just necessary to teach the children, the users, but also the responsible persons around them, so they can provide support and protection. When we look at participation, I can be very brief. It’s mentioned that meaningful participation of all stakeholders is required, but it’s not said that children should be part of that. In light of that, it’s all the more important that children and young people also take part in the Internet Governance Forum at the multi-stakeholder level to bring in their voices and perspectives, so that they come into this meaningful participation process of all stakeholders. That was a short overview of the GDC, and I hope it was meaningful.

Jutta Croll: Yes, thank you so much, Torsten. We would also like to remind everybody: if you haven’t had a look at the Global Digital Compact, please do so. It’s open for endorsement by individuals as well as organizations, companies, and so on, and the more people endorse the Global Digital Compact, the more weight all these recommendations will carry. So, finally, Hazel, thank you for your patience waiting in the Zoom room. We are happy to have you here to give us the perspective of children, with a special focus on the Asia-Pacific area. Over to you.

Hazel Bitana: Thank you, Jutta. I’m Hazel from Child Rights Coalition Asia, or CRC Asia, a network of organizations working for and with children. And I would say that to keep children safe and help them reap the benefits of virtual environments, internet governance should be anchored in child rights principles, as founded in General Comment No. 25, which Sophie presented: the principles of the best interests of the child; child participation that takes into consideration children’s evolving capacities; non-discrimination and inclusion; and children’s overall development and full enjoyment of all their rights. These are the underlying messages from children, such as when my organization, Child Rights Coalition Asia, held our 2024 Regional Children’s Meeting in August in Thailand, where we had 35 child delegates representing their own national or local child-led groups based in 16 countries in Asia. Although we focused our discussions mainly on emerging practices such as sharenting, kidfluencers, and generative AI, in line with civil and political rights, the recommendations we gathered from this platform can be applied to emerging technologies like the metaverse. Summarizing their inputs, one of the recommendations is an approach that recognizes children as rights holders and not just as passive recipients of care and protection. Children want to be involved and be empowered to be part of the discussions and policymaking processes. They want child-friendly spaces and information, including child-friendly versions of the terms and conditions or privacy policies that they agree to. Another of their key recommendations is the involvement of children in decision-making processes, which allows us to have a holistic perspective. We get to learn how children are leveraging these emerging technologies to create positive change, which is not usually highlighted in discussions. 
When we talked to children about generative AI, they said that it is beneficial for their advocacy work as child rights advocates or child human rights defenders; for their recreation, the enjoyment of their right to play and leisure and to take part in creative activities; and for their education. I think these apply in the metaverse as well. And by getting children’s perspectives, you also get to see the impact of these emerging technologies on a number of children’s rights. There is already evidence of sexual exploitation and abuse in the metaverse. And at our regional children’s meeting, the child delegates raised concerns about the impact of generative AI on their right to a healthy environment, in the context of climate change, due to the energy consumption of generative AI. This could be a concern for the metaverse as well. Another concern relates to the right to privacy and informed consent, especially considering the unique privacy and data protection issues posed by both generative AI and the metaverse, as outlined by a number of our speakers today. Additionally, echoing one of Miriam’s earlier points, a key approach to internet governance is ensuring non-discrimination and inclusion in a number of aspects. For one, due to the price of virtual reality hardware and devices and the internet speed and bandwidth required, the metaverse is widening the digital divide. And in terms of freedom of expression, including gender expression, the metaverse has the potential to provide children a platform to enjoy this freedom, with avatars serving as a creative tool for expression, as Michael expounded earlier. But at the same time, without safeguards, this positive potential could instead deepen discrimination.
Cultural diversity should also be taken into consideration to keep children safe in virtual environments. In the metaverse, harmful body language, gestures, and non-verbal communication are additional aspects that should be included in the list of violations that children can submit through reporting mechanisms. And this brings me to my next point, regarding the importance of having child-friendly reporting mechanisms and effective remedies in the metaverse, with a diversity of languages, contexts, and social norms, especially in the Asian region, which always feels left behind because our languages are not always the popular ones in the digital environment, and even more so now that body language and gestures are part of the metaverse context. With this diversity, specialized regional or local safety teams should be part of the internet governance system. This facilitates timely and effective response and prevention mechanisms. And lastly,

Jutta Croll: I need to keep an eye on the time, because we now have two interventions from the online participants, and I want to give them a bit of time as well. Maybe they have questions for you or for anyone else. Thank you so much. Sophie, will you hand over to the online participants, or will you read out their questions from the chat?

Audience: I had one raised hand when we were talking about children’s rights and the children’s rights blog. It was from Andrew De Alvis, but I’m not sure if they’re still with us, because I cannot see them in the participants list anymore; maybe the participant with a question has already left. If not, feel free to raise your hand again. But I think they’re gone. I have another question from Marie Yves Nadeau, which I can read out. She has a question for a Meta speaker: The OECD report highlights the impact of virtual reality on children’s development. What prompted Meta to lower the age requirements, and how does the company address the potential risks to children? Over to you, please. Thanks for the question.

Deepali Liberhan: So, as I said before, we’ve worked with parents and young people as part of our best-interests-of-the-child framework. A lot of parents want their kids to start experiencing Meta products in a very protective way. So we’ve done two things. First, 13- to 17-year-olds, who are allowed on our platform, have certain default protections as well as the ability to have parental supervision. Below that age, accounts are actually managed by the parents. What we’ve heard from parents is that this gives them an opportunity to really effectively manage their child’s presence online and to deliver all its benefits in a very protective way. We did this similarly for Facebook as well, with Messenger Kids, which is managed by parents. So we’ve done this in consultation with parents as well as with experts.

Jutta Croll: Thank you, Deepali. We now have Ansu in the room, not online. So, please go ahead. I can give you only one minute, and one minute for the response.

Audience: Yeah. So, the question is, have you considered, anyone can answer this, have you considered the design principles for a governance framework? Because I’m a researcher in this area. Thank you.

Jutta Croll: So, you’re asking for the design principles of regulation. Yeah.

Audience: Design principles when developing a governance framework, has anyone considered that? And if so, what?

Jutta Croll: Okay. Will you be able to answer that? So, I think she’s talking about privacy by design, safety by design. Safety by design, child rights by design,

Emma Day: Do you mean that kind of design principle, or design in general? There are safety-by-design and privacy-by-design regulations, but the theme of the Internet Governance Forum is multi-stakeholder governance, right? Tech Legality has been working with UNICEF on data governance for edtech, and some of that is related to this, with the immersive learning type of work. Part of what we’ve been looking at is the use of regulatory sandboxes. There is an organization called the Data Sphere Initiative, and they have been trying to build a multi-stakeholder regulatory sandbox: bringing together, even across borders, regulators, the private sector, and also civil society. Civil society is often the missing piece, but the idea is to bring all those stakeholders together to look at these frontier technologies and think about how to regulate them. I don’t know if that answers your question. If you want to hear more about that, we have a panel at 4.30 p.m. today local time, where we’re looking at governance of edtech, neurotech, and fintech, and we’ll look at that multi-stakeholder approach. Thank you.

Jutta Croll: Yes. Thank you so much. Thanks to all of you who have been in the room or taken part online. I now have only three minutes to wrap up, but I will try to do my very best. I think we’ve learned that, as we said at the beginning, virtual environments, virtual worlds, are already inhabited by children. We heard from Deepak that 51% of the 600 million users are under the age of 13, even though we know that there are age restrictions. That is mostly due to the fact that we find virtual worlds in the gaming area. But we also heard that social media platforms are already virtual worlds, because we are inhabitants of these social media environments, giving them our data and our profiles, and thus various information about our identity. That led us to the question of data minimization, which stands in some tension with knowing the age of users, whether by age estimation or by age verification, which always goes hand in hand with collecting data about the users. That was also something that Meta representative Deepali told us: that they are trying to balance that and to minimize the data. We have some regulations; the GDPR in particular was mentioned, which is applicable only in Europe but has been copied in several areas of the world, giving us an orientation as to the principle of data minimization and how it could also be applied to the metaverse. But then we also learned that larger amounts of data, and more sensitive data, are collected there. That would be the reason why we need another level of data governance for the metaverse, considering that we already have a regulatory gap when it comes to virtual reality. Finally, I would like to go back to children’s rights. We have learned that we have the areas of protection rights, provision rights, and participation rights.
And when we are talking about virtual environments, we always focus a bit more on children’s safety. But still, we have seen that there is really a huge opportunity to build on the evolving capacities of children, to provide them with education, including peer education, in virtual environments. That will also ensure their right to participation. Finally, we heard that children want to be empowered and involved, and that they see generative AI as an instrument for children’s advocacy. I think that is a very important and future-oriented message that we got at the end of our session, and it is the message that I will conclude with: let’s focus on the opportunities that the metaverse will provide children, without neglecting their right to be protected. Thank you so much.

M

Michael Barngrover

Speech speed

163 words per minute

Speech length

1215 words

Speech time

444 seconds

Virtual worlds encompass VR, mixed reality, 3D gaming, and social media platforms

Explanation

Michael Barngrover explains that virtual worlds are a broad concept including various digital environments. He highlights that these range from fully immersive VR experiences to traditional 3D gaming and even social media platforms.

Evidence

He mentions specific examples like Fortnite as a 3D gaming world and notes that social media platforms can also be considered virtual worlds.

Major Discussion Point

The nature and scope of virtual worlds/metaverse

Agreed with

Deepak Tewari

Jutta Croll

Agreed on

Virtual environments are already heavily populated by children

D

Deepak Tewari

Speech speed

159 words per minute

Speech length

1634 words

Speech time

616 seconds

Metaverse has 600 million monthly active users, with 51% under age 13

Explanation

Deepak Tewari provides statistics on metaverse usage, highlighting the significant presence of children. He emphasizes that a majority of users in these virtual environments are minors.

Evidence

He cites specific statistics: 600 million monthly active users, 51% under age 13, and 84% under age 18.

Major Discussion Point

The nature and scope of virtual worlds/metaverse

Agreed with

Michael Barngrover

Jutta Croll

Agreed on

Virtual environments are already heavily populated by children

Privacy-preserving age detection technology exists but faces pushback

Explanation

Deepak Tewari argues that technology for privacy-preserving age detection is available but not widely adopted. He suggests that big tech companies are resistant to implementing such technologies due to potential impacts on their business models.

Evidence

He mentions his company’s development of age-aware cameras and microphones that can detect age in real-time without storing personal data.

Major Discussion Point

Data collection and privacy concerns

Differed with

Deepali Liberhan

Differed on

Effectiveness of age verification methods

J

Jutta Croll

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Social media platforms are already virtual worlds based on user behavior

Explanation

Jutta Croll suggests that social media platforms can be considered virtual worlds due to user behavior. She argues that users inhabit these platforms by providing personal data and creating digital identities.

Major Discussion Point

The nature and scope of virtual worlds/metaverse

Agreed with

Michael Barngrover

Deepak Tewari

Agreed on

Virtual environments are already heavily populated by children

Generative AI seen as tool for children’s advocacy

Explanation

Jutta Croll highlights that children view generative AI as a potential tool for advocacy. This perspective emphasizes the positive potential of emerging technologies for empowering children.

Major Discussion Point

Opportunities and risks of metaverse for children

D

Deepali Liberhan

Speech speed

167 words per minute

Speech length

1881 words

Speech time

675 seconds

Metaverse offers immersive learning opportunities beyond gaming

Explanation

Deepali Liberhan emphasizes the educational potential of the metaverse beyond entertainment. She argues that immersive virtual environments can provide unique learning experiences for children.

Evidence

She mentions the possibility of virtually visiting historical sites like the Pyramids of Giza or the Eiffel Tower, and collaborative learning across geographical and economic boundaries.

Major Discussion Point

Opportunities and risks of metaverse for children

Meta implements default privacy settings and parental controls for teens

Explanation

Deepali Liberhan describes Meta’s approach to child safety in virtual environments. She highlights the implementation of default privacy settings for teens and parental supervision tools across Meta’s products.

Evidence

She mentions specific features like default private profiles for 13-17 year olds, personal boundary settings in VR, and parental controls for time limits and app approvals.

Major Discussion Point

Children’s rights and safety in virtual environments

Agreed with

Sophie Pohle

Lhajoui Maryem

Hazel Bitana

Agreed on

Need for age-appropriate content and safety measures in virtual environments

Differed with

Hazel Bitana

Differed on

Approach to child safety in virtual environments

Need to balance age verification with data minimization principles

Explanation

Deepali Liberhan discusses the challenge of verifying users’ ages while adhering to data minimization principles. She emphasizes Meta’s efforts to find effective age assurance methods that don’t compromise user privacy.

Evidence

She mentions the use of trained personnel to identify underage users, proactive technology using behavioral signals, and third-party solutions like YOTI for age estimation.

Major Discussion Point

Data collection and privacy concerns

Differed with

Deepak Tewari

Differed on

Effectiveness of age verification methods

S

Sophie Pohle

Speech speed

114 words per minute

Speech length

961 words

Speech time

501 seconds

General Comment 25 provides framework for children’s rights in digital environments

Explanation

Sophie Pohle introduces General Comment 25 as a guiding document for children’s rights in digital contexts. She explains that it offers a practical framework for implementing the Convention on the Rights of the Child in digital environments.

Evidence

She outlines the four general principles of GC25: non-discrimination, best interests of the child, right to life and development, and respect for the views of the child.

Major Discussion Point

Children’s rights and safety in virtual environments

Agreed with

Deepali Liberhan

Lhajoui Maryem

Hazel Bitana

Agreed on

Need for age-appropriate content and safety measures in virtual environments

L

Lhajoui Maryem

Speech speed

154 words per minute

Speech length

1313 words

Speech time

510 seconds

Need for age-appropriate content and safer digital experiences for children

Explanation

Lhajoui Maryem emphasizes the importance of creating safe and age-appropriate digital experiences for children. She argues for the need to involve children in the design and policy-making processes of digital platforms.

Evidence

She mentions the use of playful tools and workshops to engage children in discussions about online safety and digital identity.

Major Discussion Point

Children’s rights and safety in virtual environments

Agreed with

Deepali Liberhan

Sophie Pohle

Hazel Bitana

Agreed on

Need for age-appropriate content and safety measures in virtual environments

H

Hazel Bitana

Speech speed

140 words per minute

Speech length

724 words

Speech time

309 seconds

Importance of child-friendly reporting mechanisms and effective remedies

Explanation

Hazel Bitana stresses the need for child-friendly reporting mechanisms and effective remedies in virtual environments. She argues that these systems should consider cultural diversity and language differences, especially in the Asian region.

Evidence

She mentions the need for specialized regional or local safety teams to facilitate timely and effective response and prevention mechanisms.

Major Discussion Point

Children’s rights and safety in virtual environments

Agreed with

Deepali Liberhan

Sophie Pohle

Lhajoui Maryem

Agreed on

Need for age-appropriate content and safety measures in virtual environments

Differed with

Deepali Liberhan

Differed on

Approach to child safety in virtual environments

Children want to be involved in policymaking processes

Explanation

Hazel Bitana emphasizes children’s desire to be actively involved in discussions and policymaking related to digital environments. She argues for an approach that recognizes children as rights holders rather than passive recipients of protection.

Evidence

She cites inputs from the 2024 Regional Children’s Meeting in Thailand, where child delegates expressed their desire for involvement and child-friendly information.

Major Discussion Point

Regulation and governance of virtual environments

Potential for creative expression but also deepening discrimination

Explanation

Hazel Bitana discusses the dual nature of the metaverse for children’s expression. While it offers opportunities for creative expression through avatars, she warns that without proper safeguards, it could deepen existing discrimination.

Major Discussion Point

Opportunities and risks of metaverse for children

E

Emma Day

Speech speed

151 words per minute

Speech length

1267 words

Speech time

500 seconds

Unprecedented amounts of sensitive data collected in metaverse

Explanation

Emma Day highlights the increased data collection in metaverse environments. She explains that this data can be more sensitive and voluminous than traditional online platforms, potentially including physiological responses and brainwave patterns.

Evidence

She mentions potential uses of this data for targeted marketing, commercial surveillance, or government surveillance.

Major Discussion Point

Data collection and privacy concerns

Existing regulations like GDPR apply but new challenges emerge in metaverse

Explanation

Emma Day discusses the applicability of existing regulations like GDPR to the metaverse. She points out that while these regulations still apply, the metaverse presents new challenges in areas like determining data controllers and processors.

Evidence

She gives an example of the complexity of providing privacy notices in different parts of a virtual environment.

Major Discussion Point

Regulation and governance of virtual environments

Multi-stakeholder approach needed for governance frameworks

Explanation

Emma Day advocates for a multi-stakeholder approach to developing governance frameworks for new technologies. She suggests using regulatory sandboxes to bring together regulators, private sector, and civil society to address frontier technologies.

Evidence

She mentions work with UNICEF on data governance for edtech and the Data Sphere Initiative’s efforts to create multi-stakeholder regulatory sandboxes.

Major Discussion Point

Regulation and governance of virtual environments

T

Torsten Krause

Speech speed

123 words per minute

Speech length

743 words

Speech time

361 seconds

Global Digital Compact calls for protecting children’s rights in digital spaces

Explanation

Torsten Krause introduces the Global Digital Compact as a framework for shaping the digital environment. He highlights its emphasis on protecting children’s rights in digital spaces.

Evidence

He cites specific objectives from the GDC, including closing digital divides, fostering inclusive digital spaces, and creating safe and trustworthy emerging technologies.

Major Discussion Point

Data collection and privacy concerns

States responsible for implementing child safety policies and standards

Explanation

Torsten Krause emphasizes the responsibility of states in implementing child safety measures in digital environments. He argues that states should develop and enforce national online child safety policies and standards.

Evidence

He references the GDC’s call for states to strengthen legal systems and policy frameworks to protect children’s rights in digital spaces.

Major Discussion Point

Regulation and governance of virtual environments

A

Audience

Speech speed

140 words per minute

Speech length

781 words

Speech time

333 seconds

Risk of addiction and disconnection from reality

Explanation

An audience member raises concerns about the potential for addiction to virtual environments. They question how to prevent children from becoming overly immersed in the metaverse and disconnecting from reality.

Major Discussion Point

Opportunities and risks of metaverse for children

Agreements

Agreement Points

Virtual environments are already heavily populated by children

Michael Barngrover

Deepak Tewari

Jutta Croll

Virtual worlds encompass VR, mixed reality, 3D gaming, and social media platforms

Metaverse has 600 million monthly active users, with 51% under age 13

Social media platforms are already virtual worlds based on user behavior

Speakers agree that virtual environments, including social media platforms, are already widely used by children and constitute a significant part of their digital experience.

Need for age-appropriate content and safety measures in virtual environments

Deepali Liberhan

Sophie Pohle

Lhajoui Maryem

Hazel Bitana

Meta implements default privacy settings and parental controls for teens

General Comment 25 provides framework for children’s rights in digital environments

Need for age-appropriate content and safer digital experiences for children

Importance of child-friendly reporting mechanisms and effective remedies

Multiple speakers emphasize the importance of creating safe, age-appropriate digital experiences for children with proper safeguards and reporting mechanisms.

Similar Viewpoints

These speakers highlight the tension between effective age verification and data privacy concerns in virtual environments, acknowledging the need for balance and the challenges posed by extensive data collection.

Deepak Tewari

Deepali Liberhan

Emma Day

Privacy-preserving age detection technology exists but faces pushback

Need to balance age verification with data minimization principles

Unprecedented amounts of sensitive data collected in metaverse

Both speakers advocate for involving children in the design and policy-making processes of digital platforms, emphasizing the importance of children’s perspectives in creating safe and appropriate digital experiences.

Lhajoui Maryem

Hazel Bitana

Need for age-appropriate content and safer digital experiences for children

Children want to be involved in policymaking processes

Unexpected Consensus

Potential of virtual environments for education and advocacy

Deepali Liberhan

Jutta Croll

Hazel Bitana

Metaverse offers immersive learning opportunities beyond gaming

Generative AI seen as tool for children’s advocacy

Potential for creative expression but also deepening discrimination

Despite discussions largely focusing on risks and safety concerns, there was unexpected consensus on the positive potential of virtual environments for education and children’s advocacy. This highlights a balanced view of the metaverse’s impact on children.

Overall Assessment

Summary

The main areas of agreement include the widespread use of virtual environments by children, the need for age-appropriate content and safety measures, and the challenges of balancing age verification with data privacy. There was also recognition of the potential benefits of virtual environments for education and advocacy.

Consensus level

Moderate consensus was observed among speakers on key issues. While there were differing perspectives on implementation details, there was general agreement on the importance of protecting children’s rights in virtual environments. This consensus suggests a shared foundation for developing policies and regulations for children in the metaverse, but also highlights the need for continued dialogue to address complex challenges like data privacy and age verification.

Differences

Different Viewpoints

Effectiveness of age verification methods

Deepak Tewari

Deepali Liberhan

Privacy-preserving age detection technology exists but faces pushback

Need to balance age verification with data minimization principles

While Deepak Tewari argues that effective privacy-preserving age detection technology exists and should be implemented, Deepali Liberhan emphasizes the need to balance age verification with data minimization, suggesting current solutions may not fully address privacy concerns.

Approach to child safety in virtual environments

Deepali Liberhan

Hazel Bitana

Meta implements default privacy settings and parental controls for teens

Importance of child-friendly reporting mechanisms and effective remedies

Deepali Liberhan focuses on platform-specific safety measures implemented by Meta, while Hazel Bitana emphasizes the need for broader, culturally sensitive reporting mechanisms and remedies across virtual environments.

Unexpected Differences

Perception of generative AI by children

Jutta Croll

Hazel Bitana

Generative AI seen as tool for children’s advocacy

Potential for creative expression but also deepening discrimination

While Jutta Croll presents a positive view of children using generative AI for advocacy, Hazel Bitana highlights both the creative potential and the risk of deepening discrimination. This unexpected difference shows the complexity of emerging technologies’ impact on children’s rights.

Overall Assessment

Summary

The main areas of disagreement revolve around the implementation of age verification technologies, the approach to child safety in virtual environments, and the involvement of children in policymaking processes.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of protecting children’s rights in virtual environments, speakers differ significantly in their proposed approaches and solutions. These differences highlight the complexity of balancing privacy, safety, and children’s participation in the rapidly evolving digital landscape, particularly in the context of the metaverse. The implications of these disagreements suggest that a multi-stakeholder approach, as proposed by Emma Day, may be necessary to develop comprehensive and effective governance frameworks for children’s safety and rights in virtual environments.

Partial Agreements

Partial Agreements

All speakers agree on the importance of child safety in virtual environments, but differ in their approaches. Deepali Liberhan focuses on platform-specific measures, while Hazel Bitana and Lhajoui Maryem advocate for more direct involvement of children in policymaking and design processes.

Deepali Liberhan

Hazel Bitana

Lhajoui Maryem

Meta implements default privacy settings and parental controls for teens

Children want to be involved in policymaking processes

Need for age-appropriate content and safer digital experiences for children


Takeaways

Key Takeaways

Virtual worlds/metaverse are already widely used by children, with 51% of users under age 13

Existing regulations like GDPR apply to metaverse but new challenges emerge around data collection and privacy

Children’s rights frameworks like General Comment 25 should guide metaverse governance

Age verification and parental controls are important but must be balanced with data minimization

Metaverse offers opportunities for education and creative expression but also risks like addiction and discrimination

Multi-stakeholder approach involving children is needed for effective governance

Resolutions and Action Items

States should implement national online child safety policies and standards as per Global Digital Compact

Companies should conduct child rights impact assessments for metaverse products

More research needed on impacts of virtual reality on child development

Develop child-friendly reporting mechanisms for metaverse environments

Unresolved Issues

How to effectively verify age in metaverse without excessive data collection

Appropriate consequences for virtual crimes or harms

How to close digital divides in access to metaverse technologies

Balancing innovation with protection in metaverse regulation

Suggested Compromises

Age verification at device/OS level rather than by individual platforms to minimize data collection

Use of privacy-preserving age estimation technologies instead of strict verification

Allowing children access to metaverse under parental supervision with strong default protections

Thought Provoking Comments

With virtual worlds, it is a challenging term, because it’s very broad and very encompassing. So there are at least four broad concepts of virtual worlds that I think are very relevant.

speaker

Michael Barngrover

reason

This comment provided a framework for understanding the complexity and breadth of virtual worlds, setting the stage for a more nuanced discussion.

impact

It led to a deeper exploration of different types of virtual environments and their implications for children, moving the conversation beyond simplistic notions of the metaverse.

There is a cognitive load to be thinking about and managing your activities and your presence, and thus the activities and presence of others in multiple worlds, the non-digital world, but also the digital virtual worlds.

speaker

Michael Barngrover

reason

This insight highlighted an often overlooked aspect of virtual worlds – the cognitive demands they place on users, especially children.

impact

It shifted the discussion to consider the psychological and developmental impacts of virtual worlds on children, beyond just safety concerns.

Technology exists today from companies like Privately, which have developed fully privacy-preserving, GDPR-compliant solutions for detecting age of users.

speaker

Deepak Tewari

reason

This comment introduced concrete technological solutions to age verification, a key challenge in protecting children online.

impact

It moved the conversation from theoretical concerns to practical solutions, sparking discussion about implementation and effectiveness of such technologies.

The metaverse is an evolving space and some of it we’re talking about is a little bit futuristic and may not actually exist yet, but we’re talking about a kind of virtual space where unprecedented amounts of data is collected from child users and adult users as well.

Speaker: Emma Day

Reason: This comment grounded the discussion in reality while highlighting the potential future challenges, particularly around data collection.

Impact: It refocused the discussion on the need for proactive regulation and governance to address future challenges in virtual environments.

Children want to be involved and be empowered to be part of the discussions and policymaking processes. They want child-friendly spaces and information, having child-friendly versions of the terms and conditions or privacy policies that they agree to.

Speaker: Hazel Bitana

Reason: This comment brought the crucial perspective of children themselves into the discussion, emphasizing their desire for agency and understanding.

Impact: It shifted the conversation from a protective stance to one that also considered children’s rights to participation and information, leading to a more balanced discussion of safety and empowerment.

Overall Assessment

These key comments shaped the discussion by broadening its scope from a narrow focus on safety to a more comprehensive consideration of children’s experiences in virtual worlds. They introduced technical, legal, and child-centric perspectives, leading to a richer, more nuanced dialogue about the challenges and opportunities of the metaverse for children. The discussion evolved from defining virtual worlds to exploring their cognitive impacts, from theoretical concerns to practical solutions, and from adult-centric protectionism to child-empowering approaches.

Follow-up Questions

How can addiction to the metaverse be prevented in children?

Speaker: Youth ambassador from Hong Kong

Explanation: This is important to address potential negative impacts of immersive virtual environments on children’s wellbeing and development.

How can parents and educators be effectively involved when it comes to children in the metaverse?

Speaker: Audience member from India

Explanation: Parental and educator involvement is crucial for ensuring children’s safety and positive experiences in virtual environments.

What are the design principles for developing a governance framework for the metaverse?

Speaker: Audience member Ansu

Explanation: Establishing clear design principles is important for creating effective and ethical governance structures for virtual environments.

How can age verification be implemented more effectively on social media platforms without compromising data minimization principles?

Speaker: Audience member (unnamed)

Explanation: Balancing effective age verification with data protection is crucial for ensuring child safety while respecting privacy rights.

What prompted Meta to lower the age requirement for the Quest, and how does the company address the potential risks to children?

Speaker: Marie Yves Nadeau (online participant)

Explanation: Understanding the rationale behind age requirement changes and associated risk mitigation strategies is important for assessing the impact on child safety.

How can cultural diversity be effectively incorporated into safeguarding measures in the metaverse?

Speaker: Hazel Bitana

Explanation: Ensuring cultural sensitivity in safety mechanisms is crucial for creating inclusive and effective protection for children from diverse backgrounds.

How can child-friendly reporting mechanisms and effective remedies be implemented in the metaverse?

Speaker: Hazel Bitana

Explanation: Developing accessible and effective reporting and remedy systems is essential for protecting children in virtual environments.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

IGF 2024 Newcomers Session

Session at a Glance

Summary

This discussion focused on the Internet Governance Forum (IGF), its history, structure, and role in shaping internet policy. The IGF emerged from the World Summit on the Information Society in the early 2000s as a multi-stakeholder platform for dialogue on internet issues. It brings together governments, civil society, the private sector, and technical communities to discuss emerging internet topics and best practices.

Key features of the IGF include its bottom-up agenda setting, inclusivity, and non-commercial nature. The forum has grown significantly since its first meeting in 2006, now attracting thousands of participants. It produces outputs like best practice forums and policy networks on topics such as cybersecurity, AI, and internet fragmentation. The IGF also supports national and regional initiatives to foster local internet governance discussions.

The speakers emphasized the IGF’s commitment to multi-stakeholder engagement, capacity building, and addressing the UN Sustainable Development Goals. They highlighted the forum’s role in facilitating knowledge exchange, influencing policy processes, and promoting an ethical, rights-based approach to internet governance. The discussion also touched on the upcoming renewal of the IGF’s mandate and its potential role in implementing the Global Digital Compact.

Audience questions addressed topics such as including ethics in IGF discussions, engaging media stakeholders, and incorporating emerging technologies like blockchain. The speakers encouraged active participation from newcomers and emphasized the IGF’s openness to community-driven improvements and new ideas.

Key Points

Major discussion points:

– History and purpose of the Internet Governance Forum (IGF)

– Structure and components of the IGF, including stakeholder groups, best practice forums, policy networks, etc.

– Growth and evolution of the IGF over time

– Importance of multi-stakeholder participation and bottom-up agenda setting

– Connections between the IGF and other UN processes/SDGs

Overall purpose:

The purpose of this discussion was to provide an overview and introduction to the Internet Governance Forum for newcomers and first-time attendees. The speakers aimed to explain the IGF’s history, structure, and importance, while encouraging active participation from all stakeholders.

Tone:

The overall tone was informative and welcoming. The speakers were enthusiastic about the IGF and eager to share information. There was an emphasis on inclusivity and openness throughout. Toward the end, the tone became more interactive as audience members asked questions and shared their perspectives, creating a collaborative atmosphere.

Speakers

– Chengetai Masango: Head of the Secretariat of the Internet Governance Forum

– Nnenna Nwakanma: Digital Policy, Advocacy and Cooperation Strategist

– Audience: Various audience members asking questions

Full session report

The Internet Governance Forum (IGF) Discussion: An In-Depth Overview

This comprehensive summary provides an extensive look at a discussion centered on the Internet Governance Forum (IGF), its history, structure, and role in shaping internet policy. The discussion featured Chengetai Masango, an IGF Secretariat representative, as the primary speaker, with contributions from audience members.

1. History and Purpose of the IGF

The IGF emerged from the World Summit on the Information Society in the early 2000s as a multi-stakeholder platform for dialogue on internet issues. Chengetai Masango emphasized that the IGF was established to address internet governance issues through inclusive, multi-stakeholder dialogue. This approach brings together governments, civil society, the private sector, and technical communities to discuss emerging internet topics and best practices.

Since its inception, the IGF has experienced significant growth. What began as a modest forum has now evolved into a global event attracting thousands of participants. The current event had 8,800 registrations, while the previous event in Japan saw over 10,000 participants. This growth underscores the increasing importance of internet governance discussions on the world stage.

2. Structure and Activities of the IGF

A key feature of the IGF is its bottom-up agenda setting process. Masango highlighted that the forum’s themes and topics are determined through community input, ensuring that discussions remain relevant and responsive to current issues in the internet governance landscape.

The IGF produces several tangible outputs, including:

– Best practice forums

– Policy networks

– Dynamic coalitions

These outputs cover a wide range of topics such as cybersecurity, artificial intelligence, and internet fragmentation. The forum also supports national and regional initiatives to foster local internet governance discussions, allowing for more granular, context-specific dialogues.

Masango noted that the IGF has various engagement tracks, including business, parliamentary, and youth tracks. This structure ensures that diverse perspectives are represented and that the forum remains inclusive of different stakeholder groups.

3. Impact and Partnerships

The discussion highlighted that the IGF has led to concrete outcomes in some regions, such as the establishment of Internet Exchange Points. This demonstrates the forum’s potential to influence real-world internet infrastructure development.

Masango emphasized that the IGF aligns its work with the UN Sustainable Development Goals, showcasing its commitment to broader global development objectives. The forum also partners with various UN agencies, regional bodies, and private companies to extend its reach and impact. A specific partnership with ICANN was mentioned, highlighting collaborative efforts in capacity building and outreach.

4. Participation in the IGF

The speakers encouraged active participation from newcomers and emphasized the IGF’s openness to community-driven improvements and new ideas. Masango explained that individuals can join IGF working groups and mailing lists to get involved. Additionally, national and regional IGFs allow for local participation, while online participation enables remote engagement for those unable to attend in person.

The IGF offers a travel support program to facilitate participation from developing countries and underrepresented groups. Masango also highlighted the IGF’s website as a valuable resource for accessing reports, documents, and other materials related to internet governance discussions.

5. Inclusivity and Accessibility

In response to an audience question, Masango addressed the IGF’s commitment to inclusivity, particularly regarding people with disabilities. He mentioned ongoing efforts to improve accessibility at IGF events and encouraged continued feedback and suggestions from the community to enhance these efforts.

6. Media Engagement

An audience member from Kenya highlighted the existence of an African media caucus and suggested formalizing media participation in the IGF by creating a dedicated caucus. This proposal underscores the importance of media engagement in internet governance discussions.

7. Non-Commercial Nature and Community Support

Masango emphasized the IGF’s non-commercial nature and its dependence on community support. He encouraged participants to contribute their time and expertise to various IGF initiatives and working groups.

8. Future Directions and Challenges

The discussion touched on several potential future directions for the IGF:

– Ethics: An audience member suggested formally including ethics as part of the IGF’s overall aspirations, particularly in relation to AI and human rights discussions.

– Emerging Technologies: An online participant proposed reintroducing distributed ledger technology (blockchain) as a major discussion topic, given its growing importance in the digital economy.

– Organizational Meetings: A long-time participant suggested using the IGF as a platform for organizational meetings and partner gatherings to maximize the value of in-person events.

9. IGF Renewal and WSIS Plus 20 Review

Towards the end of the discussion, Masango mentioned the upcoming renewal of the IGF mandate and the WSIS Plus 20 review. These processes will be crucial in shaping the future of the IGF and its role in global internet governance. The Global Digital Compact, which references the IGF, was also mentioned as an important development in this context.

In conclusion, this discussion provided a comprehensive overview of the Internet Governance Forum, its history, structure, and ongoing evolution. It highlighted the IGF’s commitment to multi-stakeholder engagement, capacity building, and addressing global internet governance challenges. The discussion also emphasized the forum’s openness to new ideas and community-driven improvements, encouraging active participation from all stakeholders in shaping the future of internet governance.

Session Transcript

Chengetai Masango: who don’t. So, and the internet at that time, around the turn of the century, 2000, was increasing in relevance, economically, socially. It was no longer just an academic network. And we had our first, you know, MySpaces and so on. I don’t know if those of you are old enough to remember those kinds of things. So, and this was when Kofi Annan was Secretary General. And so the Secretary General decided that yes, there would be a World Summit on the Information Society. And ITU was one of the lead organizations with UNDP, UNESCO, DESA, et cetera. And as Omer said, there were two phases. Phase one was in Geneva. And when they were discussing, they found out that, okay, we’re discussing something called internet governance. But what exactly is internet governance? Nobody really knew what internet governance was. So what they did, as in all good UN processes, they formed a working group on internet governance. And this working group was actually made up of multi-stakeholders. So governments, civil society, the private sector, as well as IGOs. And they came up with a report, which was presented in the second phase in Tunis, which also formed part of the Tunis agenda. And the mandate of the IGF is written in paragraph 72 of the Tunis agenda, which called upon the United Nations Secretary General to convene a meeting by the end of 2006, which would discuss, oh yes, we asked the United Nations Secretary General in an open and inclusive process to convene, by the second quarter of 2006, a meeting of a new forum of multi-stakeholder policy dialogue. So this was the most important thing, multi-stakeholder policy dialogue. And at that time, this concept of multi-stakeholder policy dialogue, which was envisioned here, was new, especially to the UN system. And at that time, multilateral meant just governments speaking together, and civil society, and the companies, et cetera, would be observers, if that.
But now we realize that for the Internet, most of the knowledge, of course, was in the technical community, and also civil society had to have a say. So the main thing about the IGF is that everybody has a say, everybody participates on an equal footing at the IGF. And part of it was to, of course, engage stakeholders, identify emerging issues, build capacity as well, which was also very important, especially for the global South, especially for the youth, the elderly as well, people with disabilities. We had to make sure that all these people were involved and nobody was left behind. So the first meeting of the IGF, and Omer, slight correction, was in Athens. And there in Athens I think we had over, I don’t know, about 800 people coming together. Today we have a registration of over 8,800 people. So we’ve grown over the years, which is very good. So the key points of the IGF: it’s bottom-up agenda setting. When we set our agenda, we do send out a call for themes, and anybody can participate and say what is the theme of the year. Last year, it was most definitely AI. The year before that, it was internet fragmentation, et cetera. So each year, there’s a new hot topic there. And we also do emerging issues, which is also very important. And we have capacity building. I don’t know, I’ve been talking a lot. Would you want to say a little bit of what we do with… This is actually the most exciting part about the IGF, I think, the intersessional work that we do during the year. So it’s not just planning the global forum that we are at here now. We have the best practice forums and the policy networks. And what we do is we look at hot topics, things that we think that you and the stakeholders, our community would be interested in, and the MAG decides on which types of best practice forums and policy networks.
So what we do is we get together, we talk, and we come up with what’s the best practice; we look at standards, do comparisons of them, and then we produce a report. So I encourage you to go to the IGF’s website, and you’ll see the different work that we do over the past few years. They change after a while, but the best practice forum that we have today is cybersecurity. For policy networks, we have three: AI, internet fragmentation, and meaningful access. So then we have the cooperation of the NRIs and dynamic coalitions. I encourage everybody to also, again, go to the website and see if your country or your region has a national or regional IGF and participate. So you don’t just have to participate at the end of the year. You can try to be active during the entire year. We’ve moved on. Okay. So then we have the two major groups. We have the IGF leadership panel and the multi-stakeholder advisory group. So the MAG, as we call it, I’m the current chair, and the chair is selected based on the cycle we are in for each stakeholder group that we have. So currently, I’m actually a government stakeholder, and that was for this year. And then we have the leadership panel, which we collaborate with in order to come up with the great stuff that we have today and that you see here today. And as you can see, the chair for the leadership panel is Vint Cerf, one of the founders of the internet’s underlying architecture. And the vice chair is Maria Ressa. She’s a Nobel Peace Prize winner. And of course, we also have… a selection of people who represent each stakeholder group. For Africa, we’ve got Gebenga Sessan, and for the business community, we’ve got Maria Fernanda Gaza from ICC BASIS, and she’s also based in Mexico. She’s a small business owner in Mexico.
We also do have the Secretary General’s Envoy on Technology as an ex officio member, and the three hosts: the past host, so for Japan, that has currently changed a bit, the present host, and today, we will also have the next host, which is Norway, joining the leadership panel. And as it says there, the IGF Secretariat is based in Geneva. We’re a very small secretariat. We’re only seven people, full-time, and then we do have consultants, but we do depend on community support. That’s one of the things that, we are like the internet, open-sourced. There are, of course, some people who are full-time, but most of the people are volunteers, like Omer was, and that also has helped. And again, key principles: an open, multi-stakeholder, bottom-up, inclusive, transparent, non-commercial, and community-centered process. We do not charge anything for the IGF, and the national and regional IGFs don’t charge anything as well. These are where we started. We started off in Athens, and then we went to Brazil. And last year, we were in Kyoto, and the year before that, we were in Addis Ababa. We do try and move the IGF meeting from region to region, just to make sure that people do have access, because if we have it, okay, we have it here, for instance, people from South America, it’s very difficult for them to come. But when we have it in Brazil, it’s the other way around. So we do make sure that people can come and participate physically more easily if they want to. And over the past years, and this is when we had COVID. So we had an online meeting, and from then we have actually increased our online and remote participation so that it’s seamless between the two. Next. Yes. So over the years, the Secretary General has attended. This was in Paris where we had the president of France there. And of course, Angela Merkel of Germany as well. And as I noted, remote participation is also very important. We’ll go to the next. Right. So that brings us to where we are today.
The MAG looked at the topics that the community sent in. And what we thought this year was that since there are so many cross-cutting topics, we would look at it as a holistic view. So we ended up with building a multi-stakeholder digital future. And as Chengetai was saying, the multi-stakeholder part, that’s you, me, everybody, is extremely important to the entire process and moving forward. So we have enhancing the digital contribution to peace, development and sustainability; advancing human rights and inclusion in the digital age; harnessing innovation and balancing risks in the digital space; and improving digital governance for the internet we want. And we thought that these four will help us to build our multi-stakeholder digital future. One of the, so we discuss, and there has been at the beginning, at least, you know, oh, we’re just a talk shop, et cetera. Well, we’re not just a talk shop. But also, there’s nothing wrong with being a talk shop; it is very important to have stakeholders who normally don’t meet to discuss issues and learn from each other, and then go home and discuss what they have learned here, and implement it at their home station. And this is the third mandate of the IGF. And we’re due for renewal next year. And we have had studies to see what is the actual impact of the IGF. And one famous example, which I always quote, is that, for instance, the establishment of the Internet Exchange Point in East Africa was because the regulatory authority came to the IGF, met people from Packet Clearinghouse, et cetera, had a discussion, went home, continued that contact, and an IXP was established. And the effect of that IXP was that usually Internet traffic would come directly from England, et cetera. Even if it was local, if I wanted to send an email and I was in East Africa next door, it would first go to England and then come back.
So with the establishment of the IXP, all the traffic was local, which, of course, brought down the cost of the Internet because there’s no intercontinental traffic, et cetera. And that’s also something that we are proud of, or I’ll say I’m proud that the IGF has managed. Can we just go back? Sorry. And yes, I’m taking too long. And also, we do have outputs, as Carol just said. We have best practice forums where we discuss what are best practices, or actually good practices, and also highlight bad practices as well, because it’s very important to see. And the world is not a uniform world. There are some places where economies of scale can happen, but then we also have small island developing states where it’s very difficult to have that connection. But people have faced those problems or faced those challenges, and it’s very good to have that knowledge exchange on how they have solved that problem, so people don’t have to reinvent the wheel. And as part of our outputs, we do have the IGF messages, and we try and tailor our outputs so that they can be inputs into other processes. And we’ve also seen that the IGF has been mentioned in G7 processes, in African Union processes as well. So we do see the effects. And that’s why I say the biggest effect of the IGF is the second-order effects that we have. And you want to take this one? No? Okay. Yes. As we said, these are the different parts of the IGF. And it has grown over the years. As I said, in Athens, we had about 800. Now we have over 8,000. So we’ve grown by a factor of 10. In Japan, we had over 10,000 people. So yes. So we have business engagement. We have sessions for business because it is very important. We have a parliamentary and judiciary track. Parliamentarians are the ones who set those public policy instruments. And it’s very important that they have an understanding of how their laws can affect normal internet life. For instance, how do you set laws to combat counterfeiting, et cetera.
If you just block IP addresses from a server, you might block not just that server that’s selling fake Gucci handbags, but a whole lot more. The same server may be hosting a hotline that helps people with disabilities, et cetera. So, how to combat those, and also knowledge sharing amongst parliamentarians. There are some parliamentarians who are more advanced. The European Parliament is quite well known for having good policy instruments, the GDPR, et cetera. And they can share how they do things. Of course, there are newcomers that are here. And we also try and encourage newcomers to go around. Remote hubs for people who cannot make it: instead of watching the IGF by themselves, they can gather at universities or some businesses and actually participate as a group, which we find actually encourages more debate and more interaction. We do have travel support. I know some of you are here through the travel support. It’s not as much as we would like, but at least it’s something to bring people in. Youth engagement: youth are our future, of course, and we really do want them to engage. There’s a summer school, there’s a South School, which is South American, there’s Asia-Pacific, and also there’s EuroDIG as well for Europe. And inclusion is, of course, something that we actually do look at. After every single IGF, we look at the statistics, we see who has participated, we see who we are missing, and then we look and see how we can include them for the next IGF. It’s very important for us. So if you’re wondering how you could continue your experience from today, you can join any one of these groupings that you see here, and we look forward to persons giving their input because, again, it’s multi-stakeholder, and that means you. So you could join a best practice forum, there are mailing lists, and you can help to formulate the outcomes. There are also the policy networks, as we said, on internet fragmentation, meaningful access, and artificial intelligence.
So if you want to be a part, if you have something to offer, if you want to learn something, then you should join and participate in one of these groups. And then we have the dynamic coalitions; there are 32 of those, and they cover a range of topics: blockchain, child rights online, connectivity, internet values, data, health, DNS, gender. So there’s something here for everyone. So we really encourage you to please join the mailing list and let your voices be heard. So here we have the multi-stakeholder groups: civil society, academia and research organizations, tech companies, intergovernmental and international organizations, governments, industries, and technical communities, so we’re all part of the IGF. So what we try to do as well when we form the program is to also look at the SDGs in order to help facilitate meeting the goals for 2030. So we discuss issues that shape the digital future: accountability and trust, trust is very important these days, and innovation, because even though we talk about issues, we can’t let the issues downplay the innovation that technology will allow you. So we try to do a balance between those and find out how we can balance them. We look at standards and values, which is very important, what’s the norm of a country, of a member state, what’s the culture, we can’t leave those things out. Then we brainstorm with decision makers and users so that we can form a digital future today that the future could enjoy. So we have partners and corporations that help us in this endeavor. We have over 174 national and regional IGFs, as you can see, from all over the world, and I think Saudi Arabia did join a couple of weeks ago, so there is an IGF there, and one of the later ones is Ireland, the Irish IGF was also formed. Is there a Norwegian IGF?
For instance, in Africa, there’s also a Southern African IGF, there’s a North African IGF, et cetera. Please do join and participate, or just come to our website and see where you can join. And if there isn’t, make one. Because the Internet is for everybody, and we will help you form that IGF. Youth? Yes, and we also have youth IGFs, so national, sub-regional, and youth IGFs as well. So yes, please join us. I know it may be a little bit intimidating. We have over 300 sessions. Just go, take it bit by bit, and in maybe not this year, but in a year or two, you’ll be able to navigate seamlessly throughout. And talk to people. I mean, like, Omer here, or anybody else. We are very approachable. Myself, even, Carol, the chair, we’re very approachable, and we will talk to you about things, yes. IGF and other processes? Okay. As Carol has said, we try and join with the UN 2030 Agenda and the SDGs. Each of our sessions are aligned with one or more of the SDGs. We’re currently behind. COVID did give us, make us be a little bit behind, and there is a question whether or not we will make those goals. But as you know, ICTs are seen as an accelerator, so we’ll see how. And as far as our partners are concerned, we have internal, basically the whole of the UN is our internal partners. We work very well with the ITU, UNESCO, UNDP, UN Habitat, UN Environment as well. So we do work. And also outside, we work with the African Union, as I said. EU, we work with the regional, like Economic Commission for Africa, Economic Commission for Asia, ECE, Economic Commission for Europe, et cetera, and companies. We do work with Google, Meta, Disney, and even the small companies as well. 
As I said, the person who is on our leadership panel is actually in Mexico, and also ICC. The Global Digital Compact has been passed, and the GDC did mention the IGF as one of the methods that can help with the implementation of the GDC, due to our strong multi-stakeholder component as well, and we look forward to seeing how we can do that, and we also call upon you to help us as well with that. And next year, or starting now actually, there is the WSIS Plus 20 review, which will be the main focus for next year, and it’s also coming up to the renewal of the IGF next year. We don’t know how long it’s going to be renewed for, but hopefully we will see. I will not even guess. I’ll not speculate, but we do need your support for that to happen. So if you find the IGF useful, please help in supporting us for the renewal during the course of next year. Thank you. And with that, I also would like… we’ve got 13 minutes for questions. Anything you’d want to talk about, we’re here. That was very interesting.

Audience: I’m a first-timer at the IGF, and I was looking at the SDGs and some of the values you had, for example, standards and values. I was wondering whether it’s time to include ethics as part of, you know, the overall sort of aspirations of the IGF.

Chengetai Masango: Yes. Okay. Ethics is very important; our policy network for AI is also talking about it, because we need an ethical internet. We need ethical AI. And we are looking at ways of how to integrate that into all our processes so that it’s designed that way. So ethics; we are human rights based as well. So everything that we do, everything that the UN does, well, one of our foundational documents is, of course, the Declaration of Human Rights. And that also seeps its way into what the IGF does as well. And that should include ethics. Anj, if you could go to slide 9. Sorry. No, keep going. Oh, that’s the one with the topics, right here. So you’ll see here things like advancing human rights and inclusion in the digital age. In that part, you’ll see a lot about ethics. Also, in digital governance, ethics is covered there. We do have an online question. Mohibi, can we get the question? While they’re figuring that one out, yes, please.

Audience: Good morning. We appreciate your efforts. We are considered the sons of the IGF; we have consultative status, and we have participated in the IGF for the past five years. And I take this opportunity to invite you to our booth, because we have launched a platform for protecting intellectual property in the digital era, aligning with the IGF recommendations. I invite you to our booth to learn about this platform. And it would be our pleasure to volunteer for the IGF leadership panel. Thank you.

Chengetai Masango: All right, thank you very much. We have one hand up there.

Audience: Thank you. Hi, can you hear me?

Chengetai Masango: At the back, you see there? Yeah.

Audience: Hi. Yeah, can you hear me? Yes. Well, I’m a first-time IGF attendee and I want to find out if you would be including distributed ledger technology as part of this. I’m from Nairobi, Kenya. And one thing I’d like to do is express appreciation: I come from the Association of Freelance Journalists, and I want to thank Carol, Vint, and the team at IGF for having the media caucus come and join the IGF for the first time. Thank you for the support that you’ve given the journalists. We really appreciate it, because I guess it’s time that the IGF stories are told from the inside, not from the outside looking in, with everyone just saying things. Now the journalists will have first-hand experience, and I hope this will be the beginning of many more occasions like it, and that we’ll even be able to create a caucus for the media. I’d like to let you know that we currently have the African media caucus that we started; we have several journalists in that group, we meet on WhatsApp, and we get to share what’s happening across the IGF fraternity and in the different NRIs. So I really appreciate it. One thing, and I don’t know if it’s too early to ask: I saw the caucuses that you’ve already mentioned, civil society, private sector, and all the others. Let’s have the media also be part of that, if it’s possible. Thank you.

Chengetai Masango: Thank you very much. And also, is it possible to have any of the online people speak?

Audience: Hi, can you hear me?

Chengetai Masango: Is it possible to have the online people speak?

Audience: Can you guys hear me? Thank you. Hello. Hi. Thank you for your time. My name is Ismail Sif from Guinea. I would like to thank you. As a first-timer at the IGF, I have two questions, please. First of all, for countries that don’t yet have an IGF, I would like information on how to join the IGF. That is my first question. And the second one is: is it possible to share this brief history of the IGF? How can I get it, if you have a website, or if you can share a PDF or something like that? Thank you.

Chengetai Masango: No, thank you very much for your question. We will put it on the website so you’ll be able to find it and download it. And if you want to join a national and regional initiative, the lady sitting in front there, please just approach her and she’ll show you how. Is it possible for Nnenna to speak?

Audience: It is, I believe. Can you hear me?

Chengetai Masango: Yes. Yes, we can.

Audience: I can’t seem to activate my video, but that’s fine. Hello, everyone. My name is Nnenna. I come from the Internet. I had to wake up early in my current location to join you online, first because I’m a resource person, but also because I wanted to show you that you can fully participate in the IGF series from anywhere. I wanted to share two thoughts with you, especially for those who are joining recently. I have been participating in the IGF for the past 19 years; actually, I was there before the IGF started. So it is a good thing for us to listen to our incoming colleagues. I want to raise one thing. Thank you to the one who raised ethics. Over the 20 years, every improvement in the IGF has been community motivated. What we have as best practice forums today was not called best practice forums earlier on; they used to be called birds of a feather. These are people who think that an issue is important. They caucus, just like our media colleague, and they bring it up to the secretariat and we integrate it. So please, if there is a new idea, if there is a new dimension that you want to see in the IGF, please do not be shy. Speak up. Find birds of a feather, familiar spirits or kindred spirits. Caucus together. Bring it to the fore. And you will see it materialize in the IGF. There is something else. On the IGF website, you have a lot of reports. One of the things you want to do is to read a lot. Read up on who is doing what. Look at intgovforum.org; it has a lot of resources, reports and people. Past IGF sessions are also there. You want to find out who is doing what where and get connected to them. Most of us are online. Most of us are approachable. Most of us are resource people. Please make use of that. Finally, if you happen to be at any IGF, it is a great place to bring media, to bring attention to what you are already working on. You are meeting new people; you are meeting the people you’ve always wanted to meet.
And if you are launching a book, if you’re launching a report, that will be a good place. And there are a lot of things that do not show up on the official IGF agenda. So your organization may use this to have a partners meeting, you have one-on-ones, you have a lot of other things you can do on the margin of an IGF. And that explains why most people really use that opportunity. Sometimes to even have a team meeting. If your organization is working in the digital space, that would be a good time to have your partners meeting, a good time to have even your board meeting, a good time to combine IGF with your team retreat because it cuts down your cost. So please, if you can’t do the global one, do the regional one. If you can neither do those, do the national one, but please participate. And if you can’t do it in person, you can just do it online like myself. Thank you very much.

Chengetai Masango: Thank you very much, Nnenna. And then now we have, if I can read, Mo Hippel. Sorry if I’m messing your name up, but you’re second in line online after Nnenna, please speak. If not, then we’ll go to Anja Albers.

Audience: Hi, I’m a first-time attendee at the IGF. It’s fantastic. Right now I’m online, but I will be in Riyadh in about a few hours’ time. And I want to find out if we can include distributed ledger technology as part of a forum, because that has a massive parallel revenue stream, or economy, going on, and it certainly requires this kind of multi-stakeholder engagement. Thank you.

Chengetai Masango: Can you just repeat that? If we can include what?

Audience: If we could include distributed ledger technologies such as blockchain as well as others into this forum as a discussion point because we do have, I’m also a PhD researcher, so hence the interest in this area.

Chengetai Masango: Yes, blockchain was a big issue at the IGF a couple of years ago, I think about five years ago, and we had many workshop sessions on blockchain and distributed ledgers. As we mentioned before, it is quite easy to reintroduce it. We do need to have a critical mass of people who are interested in it. So, for instance, for next year’s IGF we’ll be doing a call for issues, and if you put blockchain and distributed ledgers there, and other people do so, and it rises to the top or near the top, then yes, we can introduce it as a theme. But also, you yourself can have a workshop session on blockchain, you just have to…

Audience: Sorry, I can’t hear you. Okay, there is a disturbance, please.

Chengetai Masango: I’ll try again, and I think the technical people will just close off the microphones. So, we will have a call for sessions as well, and you, in your individual capacity, can apply for a workshop session. You would have to bring in two other stakeholders with you, from different stakeholder groups, to apply for a workshop session and have a session for next year. And the final thing you can do is to have a day zero session, where the requirements aren’t as stringent, to discuss distributed ledgers. And from that, it can help build up momentum.

Audience: Okay, that’s great. Thank you. Hello. Can you hear? Yeah.

Chengetai Masango: Okay, we’ll have you and then we’ll have somebody from the. And then, unfortunately, we’ll have to close, please. If you can.

Audience: Yeah. Actually, you called me earlier, but at the time my microphone was muted. So, just.

Chengetai Masango: That’s fine. Please state your question. Yeah.

Audience: Okay, so, yeah, I’m from Montreal, and it’s almost 2 a.m. here. So my question is: I’m from the technical community, and I would like to know how ICANN and the IGF can work together to make the Internet’s core systems, like domain names, more secure and accessible as public infrastructure. How could the IGF and ICANN work together on the technical issues? Yeah, that’s my question.

Chengetai Masango: Thank you very much. Actually, we work very closely with ICANN. At this meeting, you’ll see in the opening session, just now, we will have the incoming CEO and President of ICANN making a speech. There’s a large contingent of ICANN staff and also board of directors here at the meeting, and the IGF itself goes to ICANN meetings as well, not just to discuss technical issues, but, since we are somewhat similar in the way we organize our conferences, we also give each other best practices on how to do things better from both our perspectives. So to answer your specific question, we will just continue to work with them, and not only in our meetings, but in the summer schools as well, and in our intersessional activities. Our last question, and then we have to go, sorry. Okay.

Audience: Hi everybody, my name is Sanjay Jackson, Jr. I’m from Columbia, and I would like to find out if the IGF has plans to include the Handicap Society, or is there a special section for the Handicap Society?

Chengetai Masango: All right, thank you very much. Yes, we do actually have a dynamic coalition on accessibility and disability, and we work closely with them. They have given us a document, which we attach to our host country agreement, to make sure that all our meeting venues are accessible to people with disabilities. And you will also see, in the opening ceremony and session and in the plenary hall, that we have sign language interpretation provided as well. And it’s because of them. As Nnenna was saying, we didn’t have it at the beginning, but because the community asked for it, we’ve had it, and we’ll continue to have it. And of course, if there’s more to be done, please feel free to approach any of us or to send us an email with your suggestions. As we said, it’s a community effort; it’s a bottom-up process. And with that, I’m sorry we have to go, but every single one of us is very much approachable. We can have discussions in the corridors, et cetera. You can come into our offices, which are at the front there, and we’ll be happy to discuss anything you’d want. Thank you.

Chengetai Masango

Speech speed

127 words per minute

Speech length

4185 words

Speech time

1971 seconds

IGF established to address internet governance issues through multi-stakeholder dialogue

Explanation

The IGF was created to provide a platform for discussing internet governance issues. It was designed to be a multi-stakeholder forum where all parties could participate on equal footing.

Evidence

Kofi Annan as Secretary General initiated the World Summit on the Information Society, which led to the creation of the IGF

Major Discussion Point

History and Purpose of the Internet Governance Forum (IGF)

Agreed with

Audience

Agreed on

Multi-stakeholder approach of IGF

IGF provides a platform for stakeholders to discuss emerging internet issues and build capacity

Explanation

The IGF serves as a forum for identifying and discussing new internet-related issues. It also focuses on capacity building, especially for the global South, youth, elderly, and people with disabilities.

Evidence

The IGF has grown from 800 participants in Athens to over 8,800 registered participants currently

Major Discussion Point

History and Purpose of the Internet Governance Forum (IGF)

Agreed with

Audience

Agreed on

Multi-stakeholder approach of IGF

IGF has grown significantly in participation since its inception

Explanation

The IGF has seen substantial growth in participation over the years. This growth indicates increasing interest and relevance of the forum.

Evidence

First IGF meeting in Athens had about 800 people, while current registration is over 8,800 people

Major Discussion Point

History and Purpose of the Internet Governance Forum (IGF)

Agreed with

Audience

Agreed on

Growth and importance of IGF

IGF uses bottom-up agenda setting with community input on themes

Explanation

The IGF employs a bottom-up approach to setting its agenda. The community is invited to suggest themes for each year’s forum, ensuring relevance and inclusivity.

Evidence

Examples of past themes include AI and internet fragmentation

Major Discussion Point

Structure and Activities of the IGF

Agreed with

Audience

Agreed on

Growth and importance of IGF

IGF produces outputs like best practice forums and policy networks

Explanation

The IGF generates tangible outputs through its best practice forums and policy networks. These outputs focus on current hot topics and aim to provide practical insights and recommendations.

Evidence

Current best practice forum is on cybersecurity, and policy networks include AI, internet fragmentation, and meaningful access

Major Discussion Point

Structure and Activities of the IGF

IGF has various engagement tracks including business, parliamentary, and youth

Explanation

The IGF has developed specific tracks to engage different stakeholder groups. These tracks ensure that diverse perspectives are included in the discussions and outcomes.

Evidence

Mentions of business engagement sessions, parliamentary and judiciary tracks, and youth engagement initiatives

Major Discussion Point

Structure and Activities of the IGF

Agreed with

Audience

Agreed on

Inclusivity and accessibility of IGF

IGF has led to concrete outcomes like establishment of Internet Exchange Points

Explanation

The IGF has facilitated tangible outcomes beyond just discussions. These outcomes have had real-world impacts on internet infrastructure and accessibility.

Evidence

Example of the establishment of an Internet Exchange Point in East Africa, which reduced costs and improved local internet traffic

Major Discussion Point

Impact and Partnerships of the IGF

IGF aligns with UN Sustainable Development Goals

Explanation

The IGF’s work is aligned with the UN’s Sustainable Development Goals (SDGs). This alignment ensures that the forum’s activities contribute to broader global development objectives.

Evidence

Each IGF session is aligned with one or more SDGs

Major Discussion Point

Impact and Partnerships of the IGF

IGF partners with UN agencies, regional bodies, and private companies

Explanation

The IGF collaborates with a wide range of partners, including UN agencies, regional organizations, and private sector companies. These partnerships enhance the forum’s reach and impact.

Evidence

Mentions of partnerships with ITU, UNESCO, UNDP, African Union, EU, and companies like Google and Meta

Major Discussion Point

Impact and Partnerships of the IGF

Individuals can join IGF working groups and mailing lists

Explanation

The IGF encourages individual participation through various working groups and mailing lists. This allows for broader engagement and input from the community.

Evidence

Mentions of best practice forums, policy networks, and dynamic coalitions that individuals can join

Major Discussion Point

Participation in the IGF

National and regional IGFs allow for local participation

Explanation

The IGF has established national and regional forums to facilitate local participation. This structure allows for more context-specific discussions and engagement.

Evidence

Mention of 174 national and regional IGFs worldwide

Major Discussion Point

Participation in the IGF

Audience

Speech speed

139 words per minute

Speech length

1342 words

Speech time

577 seconds

Online participation enables remote engagement

Explanation

The IGF provides options for online participation, allowing individuals to engage remotely. This increases accessibility and inclusivity of the forum.

Evidence

Example of Nnenna Nwakanma participating online and encouraging others to do so

Major Discussion Point

Participation in the IGF

Agreed with

Chengetai Masango

Agreed on

Inclusivity and accessibility of IGF

Ethics should be included in IGF discussions

Explanation

An audience member suggested that ethics should be explicitly included in IGF discussions. This reflects a growing concern about ethical considerations in internet governance.

Major Discussion Point

Future Directions for the IGF

Distributed ledger technology could be a future IGF topic

Explanation

An audience member proposed including distributed ledger technology (like blockchain) as a topic for future IGF discussions. This suggestion reflects the growing importance of these technologies in the digital economy.

Major Discussion Point

Future Directions for the IGF

IGF should continue improving accessibility for people with disabilities

Explanation

An audience member inquired about IGF’s plans for including people with disabilities. This highlights the ongoing need for improving accessibility in internet governance discussions.

Major Discussion Point

Future Directions for the IGF

Agreed with

Chengetai Masango

Agreed on

Inclusivity and accessibility of IGF

Agreements

Agreement Points

Multi-stakeholder approach of IGF

Chengetai Masango

Audience

IGF established to address internet governance issues through multi-stakeholder dialogue

IGF provides a platform for stakeholders to discuss emerging internet issues and build capacity

There is a consensus on the importance of the multi-stakeholder approach in the IGF, allowing diverse groups to participate and contribute to internet governance discussions.

Growth and importance of IGF

Chengetai Masango

Audience

IGF has grown significantly in participation since its inception

IGF uses bottom-up agenda setting with community input on themes

There is agreement on the significant growth of IGF participation and its importance as a platform for discussing internet governance issues.

Inclusivity and accessibility of IGF

Chengetai Masango

Audience

IGF has various engagement tracks including business, parliamentary, and youth

Online participation enables remote engagement

IGF should continue improving accessibility for people with disabilities

There is a shared view on the importance of making IGF inclusive and accessible to various groups, including remote participants and people with disabilities.

Similar Viewpoints

Both the speaker and audience members emphasize the importance of active participation in IGF activities and working groups to contribute to internet governance discussions.

Chengetai Masango

Audience

IGF produces outputs like best practice forums and policy networks

Individuals can join IGF working groups and mailing lists

Unexpected Consensus

Importance of local and regional IGF initiatives

Chengetai Masango

Audience

National and regional IGFs allow for local participation

Online participation enables remote engagement

There was an unexpected consensus on the importance of both local/regional IGF initiatives and online participation, showing a shared understanding of the need for diverse engagement methods.

Overall Assessment

Summary

The main areas of agreement include the multi-stakeholder approach of IGF, its growth and importance, inclusivity and accessibility, active participation in IGF activities, and the value of both local/regional initiatives and online engagement.

Consensus level

There is a high level of consensus among the speakers and audience members on the core principles and functions of the IGF. This strong agreement implies a shared vision for the future of internet governance and the role of IGF in facilitating discussions and solutions to emerging issues.

Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

Summary

No significant areas of disagreement were identified in the discussion

Difference Level

Low level of disagreement. The discussion was primarily informative about the IGF’s structure, activities, and goals, with speakers largely agreeing on the presented information. This implies a unified understanding and presentation of the IGF’s role and functions among the speakers.

Partial Agreements


Takeaways

Key Takeaways

The Internet Governance Forum (IGF) was established to address internet governance issues through multi-stakeholder dialogue

IGF has grown significantly in participation since its inception, now attracting thousands of participants

IGF uses a bottom-up approach for agenda setting and produces outputs like best practice forums and policy networks

IGF aligns its work with the UN Sustainable Development Goals and partners with various UN agencies, regional bodies, and private companies

IGF has led to concrete outcomes like the establishment of Internet Exchange Points in some regions

There are multiple ways to participate in IGF, including joining working groups, attending national/regional IGFs, and engaging online

Resolutions and Action Items

IGF will put the presentation slides on their website for download

IGF will continue to work closely with ICANN on technical issues and best practices

IGF will seek support for its renewal in the coming year

Unresolved Issues

How to more formally include ethics discussions in IGF processes

Whether to reintroduce distributed ledger technology/blockchain as a major discussion topic

How to further improve accessibility for people with disabilities at IGF events

Suggested Compromises

For those unable to attend global IGF events, participating in regional or national IGFs was suggested as an alternative

Online participation was highlighted as an option for those who cannot attend in person

Thought Provoking Comments

I was wondering whether it’s time to include ethics as part of, you know, the overall sort of aspirations of the IGF.

speaker

Audience member

reason

This comment introduced an important new dimension – ethics – that had not been explicitly mentioned before. It challenged the IGF to consider expanding its focus areas.

impact

It prompted Chengetai Masango to explain how ethics is already implicitly part of IGF’s work, especially in areas like AI and human rights. This led to a deeper discussion of IGF’s values and priorities.

I’d like to let you know that we currently have the African media caucus that we started and we’ve been going on and we have several journalists in that group and we meet on WhatsApp and we get to share what’s happening in the different IGF fraternity and in the different NRIs that are happening.

speaker

Audience member from Kenya

reason

This comment highlighted grassroots initiatives and self-organization within the IGF community, showcasing how participants are taking ownership of the process.

impact

It drew attention to the role of media and journalists in the IGF process, leading to a suggestion to include media as an official caucus. This expanded the conversation about stakeholder groups and representation.

Over the 20 years, every improvement in IGF has been community motivated. What we have as best practice forums today was not called best practice forums earlier on. They used to be called birds of a feather. These are people who think that an issue is important. They caucus, just like our media colleague. And they bring it up to the secretariat and we integrate it.

speaker

Nnenna

reason

This comment provided valuable historical context and emphasized the bottom-up, community-driven nature of IGF’s evolution.

impact

It encouraged new participants to actively contribute ideas and shape the future of IGF. This shifted the tone of the discussion from informational to more participatory and empowering.

I want to find out if we can include distributed ledger technology as part of a forum, because that has a massive parallel revenue stream or economy that’s going on and certainly requires this kind of multi-stakeholder engagement.

speaker

Online audience member

reason

This comment brought up a specific emerging technology area (blockchain/DLT) and made a case for its inclusion in IGF discussions.

impact

It led to an explanation from Chengetai Masango about how new topics can be introduced to IGF, including through workshop proposals. This provided practical information for participants wanting to shape the agenda.

I would like to find out if there are plans IGF has to include the Handicap Society, or is there a special section for the Handicap Society?

speaker

Sanjay Jackson, Jr.

reason

This question raised an important issue of inclusivity and accessibility, which are crucial for a truly global and representative forum.

impact

It prompted a discussion of IGF’s efforts to include people with disabilities, both in terms of physical accessibility at events and representation in discussions. This highlighted IGF’s commitment to inclusivity and responsiveness to community needs.

Overall Assessment

These key comments shaped the discussion by broadening its scope beyond the initial presentation of IGF’s history and structure. They introduced new topics (ethics, blockchain), highlighted grassroots initiatives, emphasized the community-driven nature of IGF, and raised important questions about inclusivity. This led to a more interactive and dynamic conversation that showcased IGF’s adaptability and responsiveness to participant input. The discussion evolved from a one-way informational session to a more collaborative exploration of IGF’s present and future directions.

Follow-up Questions

How to include ethics as part of the overall aspirations of the IGF

speaker

Audience member (first-timer at IGF)

explanation

The speaker suggested it may be time to explicitly include ethics in IGF’s goals, given its importance for issues like AI and internet governance.

Possibility of creating a media caucus within IGF

speaker

Audience member from Association of Freelance Journalists

explanation

The speaker suggested formalizing media participation in IGF by creating a dedicated caucus, similar to other stakeholder groups.

How countries without an IGF can join

speaker

Ismail Sif from Guinea

explanation

As a first-time attendee from a country without an IGF, the speaker wanted information on how to establish participation.

Including distributed ledger technology (blockchain) as a discussion topic

speaker

Online audience member (PhD researcher)

explanation

The speaker suggested reintroducing blockchain and distributed ledger technologies as a major topic, given their growing importance in the digital economy.

How ICANN and IGF can work together on technical issues

speaker

Online audience member from Montreal

explanation

The speaker from the technical community wanted to know how IGF and ICANN could collaborate to improve security and accessibility of core internet systems.

Plans for including the handicapped society in IGF

speaker

Sanjay Jackson, Jr. from Columbia

explanation

The speaker inquired about specific initiatives or sections dedicated to including people with disabilities in IGF activities.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

IGF 2024 Opening Ceremony

Session at a Glance

Summary

The opening session of the Internet Governance Forum (IGF) 2024 in Riyadh, Saudi Arabia focused on shaping a multi-stakeholder digital future and addressing global digital challenges. Speakers emphasized the importance of bridging digital divides, including gender gaps and disparities between developed and developing nations. They highlighted the need for affordable digital infrastructure, digital resilience, and inclusive governance mechanisms.

The adoption of the Global Digital Compact was noted as a significant milestone, recognizing the IGF as a primary platform for discussing internet governance issues. Speakers stressed the importance of ethical AI development and governance, calling for transparency, fairness, and accountability in AI systems. The potential of digital technologies to accelerate human progress was acknowledged, alongside the need for guardrails and collaborative governance approaches.

Several initiatives were announced, including Saudi Arabia’s efforts to build AI training infrastructure and UNESCO’s new Internet Universality Indicators and Guidelines for AI use in the judiciary. Speakers called for increased investment in fundamental research on network information theory and AI. The role of the multi-stakeholder model in driving internet governance progress was repeatedly emphasized.

Participants highlighted the need to address challenges such as cybersecurity threats, online hate speech, and the potential misuse of AI. They stressed the importance of protecting human rights in the digital space and ensuring that technology serves humanity. The discussion underscored the critical role of international cooperation and capacity building in achieving an inclusive, secure, and equitable digital future for all.

Keypoints

Major discussion points:

– The importance of inclusive, ethical and responsible development of digital technologies and AI to benefit all of humanity

– The need to address digital divides, including gender gaps and disparities between developed and developing countries

– The role of multi-stakeholder collaboration and governance in shaping the future of the internet and digital technologies

– The potential of digital technologies and AI to drive economic growth, innovation and sustainable development

– The importance of protecting human rights, privacy and security in the digital realm

Overall purpose:

The overall purpose of this discussion was to open the 2024 Internet Governance Forum and set the agenda for addressing key challenges and opportunities in global internet governance and the development of digital technologies. Speakers aimed to highlight the need for collaborative, inclusive approaches to shape a digital future that benefits all.

Overall tone:

The tone was largely optimistic and forward-looking, with speakers emphasizing the transformative potential of digital technologies while also acknowledging challenges that need to be addressed. There was a sense of urgency about the need to act now to shape the future of the internet and AI in positive ways. The tone remained consistent throughout, with different speakers reinforcing similar themes about collaboration, inclusion and responsible development of technology.

Speakers

– Announcer: Event host/moderator

– Li Junhua: Undersecretary General of the United Nations Department of Economic and Social Affairs

– António Guterres: UN Secretary General

– Abdullah bin Amer Alswaha: Minister of Communications and Information Technology of the Kingdom of Saudi Arabia

– Doreen Bogdan-Martin: Secretary General at International Telecommunication Union

– Krzysztof Gawkowski: Deputy Prime Minister and Minister of Digital Affairs of the Republic of Poland

– Amal El Fallah Seghrouchni: Minister of Digital Transition of Morocco

– Torgeir Micaelsen: State Secretary of the Ministry of Digitalization and Public Governance at the Government of Norway

– Kurtis Lindqvist: CEO of the Internet Corporation for Assigned Names and Numbers

– Tawfik Jelassi: Assistant Director General at United Nations Educational, Scientific and Cultural Organization (UNESCO)

– Ke Gong: President of the World Federation of Engineering Organizations

– Palwasha Mohammed Zai Khan: Senator at the Senate of Pakistan

– Ivana Bartoletti: Global Chief Privacy and AI Governance Officer at Wipro

Additional speakers:

– Sarah: Character in the introductory narrative

– Father: Character in the introductory narrative

Full session report

The opening session of the Internet Governance Forum (IGF) 2024 in Riyadh, Saudi Arabia, convened high-level speakers from various sectors to discuss critical issues in internet governance and digital development, focusing on shaping a multi-stakeholder digital future and addressing global digital challenges.

Digital Inclusion and Bridging Divides

A central theme was the urgent need to address digital divides, including gender gaps and disparities between developed and developing nations. Abdullah bin Amer Alswaha, Minister of Communications and Information Technology of Saudi Arabia, emphasised closing digital, gender, and AI divides. Doreen Bogdan-Martin, Secretary General at the International Telecommunication Union, highlighted that a third of humanity remains offline, calling for targeted interventions and investment in affordable digital infrastructure and services. Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro, stressed the unacceptability of the digital gender gap.

AI Governance and Ethics

The governance and ethical development of artificial intelligence (AI) emerged as a crucial topic. Alswaha called for an AI governance model addressing compute, data, and algorithmic divides. António Guterres, UN Secretary General, introduced the Global Digital Compact as a blueprint for humanity’s digital future, emphasizing that “Digital technology must serve humanity, not the other way around.” Tawfik Jelassi from UNESCO reported on the development of guidelines for the ethical use of AI in the judiciary. Bartoletti emphasized the importance of AI governance for ensuring fair, transparent, and accountable systems, also mentioning the European AI Act and Wipro’s participation in the European AI Pact.

Multi-stakeholder Approach to Internet Governance

The importance of a multi-stakeholder approach was a recurring theme. Guterres recognised the IGF as the primary multi-stakeholder platform for internet governance issues. Kurtis Lindqvist, CEO of ICANN, affirmed the proven success of the multi-stakeholder model. Torgeir Micaelsen, State Secretary of Norway’s Ministry of Digitalization and Public Governance, viewed the IGF as an opportunity to shape an inclusive digital future and announced Norway as the host for the next IGF.

Digital Transformation and Economic Development

Speakers highlighted the significant role of the digital economy in global development. Alswaha noted that the digital economy represents 15% of the global economy and highlighted Saudi Arabia’s efforts to build AI training infrastructure. Palwasha Mohammed Zai Khan, Senator at the Senate of Pakistan, reported on Pakistan’s strides towards digital transformation. Amal El Fallah Seghrouchni, Minister of Digital Transition, discussed how digital technologies are reshaping governance and service delivery, mentioning the Manhattan Declaration on inclusive global scientific understanding of artificial intelligence.

Cybersecurity and Digital Resilience

The importance of cybersecurity and digital resilience was emphasised by several speakers. Li Junhua, Undersecretary General of the UN Department of Economic and Social Affairs, highlighted the need to address challenges posed by sophisticated cyberattacks. Krzysztof Gawkowski, Deputy Prime Minister and Minister of Digital Affairs of Poland, prioritised ensuring the relevance of cybersecurity systems. Ke Gong, President of the World Federation of Engineering Organizations, pointed out the responsibility of engineers in designing resilient systems against cyber threats.

Key Initiatives and Future Actions

Several initiatives were announced during the session. UNESCO reported on new Internet Universality Indicators and Guidelines for AI use in the judiciary. Bogdan-Martin mentioned the Partner2Connect Digital Coalition and its targets. Speakers called for increased investment in fundamental research on network information theory and AI. The upcoming WSIS Plus 20 review in 2025 was highlighted as a significant milestone.

The discussion also touched on challenges such as online hate speech, the potential misuse of AI, and the threats posed by deep fakes. Protecting human rights in the digital space was a recurring concern, with speakers stressing the importance of ensuring that technology serves humanity while balancing innovation with privacy concerns.

Conclusion

The opening session of IGF 2024 set a comprehensive agenda for addressing key challenges and opportunities in global internet governance and digital technology development. While there was broad consensus on major issues such as digital inclusion, AI governance, and the multi-stakeholder approach, speakers offered varying perspectives on specific strategies and focus areas. The discussion underscored the critical role of international cooperation and capacity building in achieving an inclusive, secure, and equitable digital future for all. As the forum progresses, the focus will be on translating these high-level discussions into concrete actions and policies to bridge digital divides, ensure ethical AI development, and promote a resilient and inclusive digital ecosystem.

Session Transcript

Intro: Sarah. Father, look at the bright star. That star is Suhail, my dear. It has always guided us to rain and good fortune. I want to touch it, Father. Can I? You won’t reach it alone. I’ll help you. That is the vision of the land with the power of its nation. Was that a vision, Father? No, my child. That is reality. A reality to a connected future where we build bridges with the world. And now I’m handing the light over to you. Welcome to Saudi Arabia.

Announcer: Please welcome to the stage Mr. Li Junhua, Undersecretary General of the United Nations Department of Economic and Social Affairs.

Li Junhua: His Excellency, Mr. Abdullah Alswaha, Minister of Communication and Information Technology of the Government of the Kingdom of Saudi Arabia. Distinguished Ministers, Excellencies, distinguished participants, I have the honor to invite the UN Secretary General, Mr. António Guterres, to deliver his video message.

António Guterres: Excellencies, I am pleased to greet the Internet Governance Forum and thank the Kingdom of Saudi Arabia for hosting this gathering. I also thank my Internet Governance Leadership Panel for their extraordinary work throughout their mandate. Dear friends, Digital technology has fundamentally reshaped our world and holds enormous potential to accelerate human progress. But unlocking this potential for all people requires guardrails and a collaborative approach to governance. In September, world leaders reached a critical milestone, the adoption of the Global Digital Compact. The Compact is the blueprint for humanity’s digital future. It’s the first comprehensive framework of its kind, based on a simple but important principle. Digital technology must serve humanity, not the other way around. And the Compact breaks new ground in three ways. First, it expands the vision of the World Summit on Information Society to not only bridge the digital divide but recognize technology as a global public good. Second, it aims to address rapidly emerging challenges that have been missing from the global digital debate, from combating hate speech and protecting vulnerable populations online to ensuring that data benefits societies instead of contributing to further concentration of economic power. And third, the Compact includes the first true universal agreement on the international governance of artificial intelligence. It commits governments to establishing an independent international scientific panel on AI and initiating a global dialogue on its governance within the United Nations. It brings all countries to the AI table and it supports efforts to build AI capacity in developing countries. Dear friends, the Global Digital Compact also recognizes the Internet Governance Forum as the primary multi-stakeholder platform for discussing Internet governance issues. As the World implements the Compact, the work and voice of your Forum will be critical. 
Together, let’s keep building an open, free and safe Internet for all people. And I thank you.

Li Junhua: Thank you. Thank you, Mr. Secretary General. Let me echo the Secretary General’s appreciation to the Government of the Kingdom of Saudi Arabia for its warm hospitality in welcoming all of us and hosting this important event. The world today faces unprecedented challenges. Effective digital governance plays an important role in navigating this complex landscape. Digital technology has proven its power. It impacts us individually and as a society, affecting our economies and reshaping our future. It is critically important to ensure that digital technologies work for the people, not against them. Ladies and gentlemen, this year’s Internet Governance Forum marks the eve of a pivotal moment for global digital governance. In 2025, the United Nations General Assembly will conduct a 20-year review of the outcomes of the World Summit on the Information Society. The 20-year review will provide an opportunity to align the WSIS principles and outcomes with the broader dialogue and commitments on digital governance and sustainable development, including the recently adopted Global Digital Compact. The WSIS review will also consider extending the IGF’s mandate. The IGF has now expanded from a single event to encompass 174 national, regional and youth forums. Through these community-driven platforms, the IGF tackles key issues like cybersecurity, environmental sustainability, AI governance, human rights, gender equality, and digital infrastructure resilience, informing decision-makers worldwide. Since 2006, over 320 prominent individuals have served the IGF through its multi-stakeholder advisory group. This group has been a crucial conduit of community input, translating the will of the people into tangible preparations and outcomes. Likewise, the leadership panel in the last two years has made the IGF a stronger and more inclusive organization than ever, ensuring our values and missions continue to drive impactful dialogue and solutions at the highest level.
I’m truly proud of what we have achieved together. Through the IGF, you stand as guardians of an accessible, affordable, safe, and resilient Internet. Together, we are working through the challenges of the rapidly changing digital landscape, such as sophisticated cyberattacks and the swift rise of generative AI, while maintaining the Internet as a force for good. As we prepare for WSIS Plus 20 in 2025, I invite all of you to unite like never before through the IGF platform to advance meaningful change. First, continue the efforts that are needed to ensure that IGFs bridge the digital divide by serving both developing and developed countries, building capacity across sectors and boundaries, and fostering cooperation between the global North and global South. Second, I encourage all of you to strengthen local Internet governance by supporting national and regional IGFs as multi-stakeholder forums to address local needs and inspire solutions. Finally, it is crucial that the recommendations and actions emerging from this platform support the implementation of the 2030 Agenda for Sustainable Development. Excellencies, ladies and gentlemen, the Riyadh IGF presents an important historical opportunity to build on past outcomes and create a strategic roadmap for a stronger, more inclusive digital governance ecosystem. As we stand at the crossroads of the digital transformation, our actions this week will shape the digital landscape for generations to come. The challenges we face are formidable, but so is our collective potential. Let us seize the moment in Riyadh this week. Thank you.

Announcer: And now, it is our pleasure to invite His Excellency, Engineer Abdullah Alswaha, Minister of Communications and Information Technology of the Kingdom of Saudi Arabia, to the stage.

Abdullah bin Amer Alswaha: I would like to devote my speech, first of all, to making sure that, from a multilateral perspective and in a multi-stakeholder fashion, we appreciate the importance of governance and how, I would argue, it is one of the most fundamental levers for us to innovate together in shaping a better tomorrow. The world is talking today about internet governance, digital governance, AI governance, cyber governance. So what is governance? In very simple terms, governance goes back to the first industrial revolution, the steam engine, out of which there was a component called the governor, which basically controls power and creates balance for steam to make sure that we can benefit humanity for the greater good. But that definition is 563 years off, because governance actually goes back to the heart of the Arab and Muslim world during the Islamic and Arabic golden age, in which we introduced to the world gears in a system called the saqiya, an irrigation system by al-Jazari, which basically controlled the flow of water for, once again, power and distribution of resources for the greater benefit of humanity. So why is it very critical? We are today talking about the digital divide, but before we talk about that, we must zoom out and talk about the global divide and then zoom in on the way forward, talking about the AI divide and the need for a new AI governance model. So let’s talk about the global divide. Globally, we have a population of 8.2 billion. If you look at the north and the south, there are 1.3 billion up north and 6.9 billion in the south. But if we look at the distribution of wealth, and let’s use global GDP as a proxy, there is roughly $110 trillion worth of GDP output in the world. How are we doing? $45 trillion for the global north and $65 trillion for the global south. That does not seem too bad. But where the disparity and the divide lie is in per capita terms: it translates into $35,000 per capita up in the north and $10,000 in the south.
So for every dollar being made in the global south, somebody in the north makes $3.50. That doesn’t sound right. And it’s not a surprise that, as a result, it is going to take us 134 years to close the global gender divide. And it’s not a surprise that the global gender divide is costing humanity $7 trillion. And talking about another $7 trillion, global trade barriers today are costing us as much, and the cost of inaction on climate change is $6 trillion. Mind you, that’s the size of five to six G20 nations. Let’s talk about the digital world. Are we doing any better? In the north, we have 1.1 billion people connected, 91%. Great job. In the south, we have 2.5 billion people left behind, with only 4.4 billion connected. And when we’re talking about 15% of the global economy being the digital economy, $15.5 trillion, how does that fare in per capita terms? Once again, in the global south it’s $1,400, and in the global north it’s $5,000. Yet again, for every single dollar in the south, $3.50 in the north. That doesn’t seem right. And it’s not a surprise that the cost of this divide in the digital world means a third of the world being left behind. We still have a 5 million shortage when it comes to talent in cybersecurity. We have the governor of SDAIA here. We have a 3 million shortage in data and AI specialists. And we still have a long way to go in terms of the gender digital divide. And this is why, in collaboration with you, the ITU, UNDESA, the Digital Cooperation Organization, and Saudi Arabia, leading by example, we have launched initiatives like Connecting from the Skies. And I see the commissioner of CST here, and how we partnered with ITU: connecting the world through terrestrial networks is going to cost humanity half a trillion dollars; we could connect it from the skies in partnership with ITU. The Digital Cooperation Organization,
representing 10% of the global population, 800 million people, and I see here Dima El Yahya doing a fantastic job leaving no one behind by creating a digital future for all; and Saudi Arabia leading by example by jumping from 7% to 35% women’s empowerment in tech, beating the Silicon Valley, EU, and even G20 averages. And can I have a big round of applause for all the amazing women that we have here. You are such role models to all of us. So let’s recap. Within the digital world we’re talking about people being left behind, but we have to talk about the next chapter, the AI age, and how we move from the digital age to the intelligence age. Is it any better? Here we spoke about a digital divide, a skills divide, a governance divide. What’s happening within the AI age? It’s projected that over the next five years a billion people will harness the benefits of the intelligence age, the AI age. But there are three new divides we must address today, and they are the compute divide, the data divide, and the algorithmic divide. And the reason why they’re so critical is a fundamental law that all AI models right now adhere to, called the scaling law, which in very simple terms means the more compute you have, the less noisy the model; the more data sets and tokens you have, the less noisy the model; and the more parameters and intelligence nodes and knobs, the less noisy the model. Think of it as painting a picture: if you have many crayons, many colors, and the ability to draw it perfectly, it will be less noisy. And that’s why, in partnership and collaboration with you, in today’s IGF and for the next 20 years, we must agree on a governance model that is able to tackle these three challenges: the compute divide, the data divide, and the algorithmic divide. Because the cost is so large and there’s so much at stake.
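[Editor's note: the "scaling law" the Minister paraphrases is commonly stated in the research literature as a power-law relationship between model loss ("noise") and model size and training data; the form below is the Chinchilla-style decomposition from published work, not something stated in the session, and the constants are empirical:]

```latex
% Loss L falls as parameters N and training tokens D grow:
%   E           : irreducible loss floor
%   A, B        : fitted constants
%   \alpha,\beta: fitted exponents (both roughly 0.3 in published fits)
L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```

Compute enters implicitly: for dense transformer training, compute scales roughly as C ≈ 6ND, so a larger compute budget lets both N and D grow and both loss terms shrink, which is the sense in which "more compute means a less noisy model."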
We’re talking about a gap in compute capacity of about 63 gigawatts, which only a handful of nations can deliver. We’re talking about a 10 million shortage across data scientists, cybersecurity professionals, and AI professionals needed to close the divide. And we’re talking about 7.5 billion people left behind. And we’re no longer talking about the global north or the global south: if we’re talking about 8 billion people, 8 out of 10 of you will be left behind. And this is why this is relevant to all of us. And if we did not achieve multilateralism and multistakeholderism in the past, we must reach consensus at this IGF. And we need, once again, to tackle the algorithmic divide, the data divide, and the compute divide. We need algorithms that are helpful, honest, and harmless, to make sure that there is no bias that leaves anyone behind, and no AI or data scientist inserting and hard-coding a guardrail to exclude any of us. We need to make sure that data is accessible, accurate, and accountable, with no synthetic data being modeled to exclude one group versus another. And what are we doing about that? We’re doing a lot, in collaboration with you. SDAIA, in partnership with UNESCO, has launched the ICAIRE center, through which we have aligned with all the members of the UN to make sure that AI research and ethics delivers honest, harmless, and helpful AI models and algorithms for the world. The Digital Cooperation Organization has launched the Generative AI Center of Excellence, making sure that we leave no one behind in the global south, and we have a very loud voice. And for closing the digital and AI gap in skill sets, we’re partnering with the ITU and UNDESA through the EQUALS partnership.
When it comes to compute, 63 gigawatts’ worth of power, a handful of nations, we have a fiduciary duty to make sure that this general-purpose technology leaves no one behind. It has to be scalable. It has to be secure and robust, respecting your sovereignty and serving the world. And it has to be sustainable: it cannot add insult to injury to the $6 trillion cost of inaction on climate change. And this is why, in partnership with you and with global leaders, the Kingdom is leveraging its land, capital, captive market, and energy in partnership with global players like Google, Groq, and SambaNova to build one of the largest AI training and inference nodes to serve humanity. And we have to move from digital public infrastructure to AI public infrastructure. Because if we take the case study of telemedicine, it is good enough that we cut waiting times by delivering the largest virtual hospital. And I want to congratulate the Minister of Health for this achievement under the guidance and support of His Royal Highness Mohammed bin Salman, delivering 50 million virtual consultations, not just for the Kingdom, but for the region. But the next evolution is taking the first full robotic heart transplant to close the shortage of heart surgeons around the world. And this is why digital public infrastructure has to come with AI public infrastructure. And is Riyadh the right place to achieve it? History is a great predictor of the future. When the world was hit with COVID in 2020, this was the capital that drove consensus on a $5 trillion stimulus that grew to $11 trillion to save the global economy. We pledged with the G20 nations $21 billion to accelerate vaccines. And we drove, for the first time, not just agreement but a commitment to implementation of the OECD principles of trustworthy AI.
And if you’re talking about the past couple of years: the work that we have done with UNDESA, with UNESCO on ethics, with the DCO on the Gen AI Center of Excellence, and as a proud member of the global community that signed on to the Pact for the Future and the Global Digital Compact as inputs to the WSIS and the IGF. And this is why it gives me great honor and pleasure to present to you from Saudi Arabia an initiative, an announcement today: that we must deliver an AI model and a governance model that is inclusive, innovative, and impactful, to close the new divides. And with that, I would love to invite His Excellency Lee, His Excellency Sharaf, and Dima to make this historic announcement. Thank you so much.

Announcer: Please, in the middle, please. Please welcome to the stage Ms. Doreen Bogdan-Martin, Secretary General of the International Telecommunication Union.

Doreen Bogdan-Martin: Honorable Ministers, Excellencies, ladies and gentlemen, Salaam Alaikum. It’s great to be here today in Riyadh. I want to take this opportunity to thank Saudi Arabia for being such incredible hosts. And of course, I want to thank His Excellency, Minister Alswaha, and your team for making us all feel at home. Let me also congratulate Saudi Arabia on their successful bid to host the FIFA World Cup in 2034. So, ladies and gentlemen, let me start with the question, where were you in 2005? 2005. Well, Jeneline Marber was farming vanilla beans in Papua New Guinea. She had never sent an email. She had never made a video call, and she’d never used the internet. A couple of years ago, Jeneline received digital skills training from the ITU and from FAO, and today she has a thriving e-commerce business selling her vanilla beans all over the globe. Jeneline’s story is the story of millions of people. It’s the story of digital opportunities, and it’s the vision that we had 20 years ago at the World Summit on the Information Society in Tunis. I was there in 2005, and I know many of you in this room were also there. Back then, 1 billion people were connected, and here in Saudi Arabia, it was about 13 percent of the population. Of course, those numbers have changed dramatically over the past 20 years, but, ladies and gentlemen, we have to ask ourselves, how are we measuring progress? Can we accept today that 84 percent of people in high-income countries have access to 5G connectivity, while in low-income countries, it’s just 4 percent? How can we accept that the digital gender gap is actually getting bigger in least-developed countries, and how can we accept that a third of humanity is offline today? Well, for me, I can’t accept that.
Here at the IGF in Riyadh, I think we have an incredible opportunity, an unmissable opportunity to strengthen the incredible collective endeavor that we started just two decades ago, and to do this, I think we have to focus on three key areas, and the first is affordability. We have to bring those costs down. Mobile Internet is 14 times more expensive in Africa than it is in Europe. On average, a smartphone can cost up to 40 percent, 40 percent of a monthly income in some countries. We need investment. We really need investment in affordable digital infrastructure and services, and we need that now. That’s what the Partner2Connect Digital Coalition is all about. We have a target to get to $100 billion by 2026. We’re halfway there, and we need you to help us achieve that goal. The second focus area is digital resilience, resilience in infrastructure, resilience in governance mechanisms. This is actually something that was highlighted in the WSIS process in Action Line C2 on secure and reliable infrastructure. Digital infrastructure, including fiber optic and wireless networks, subsea cables, satellite Internet, is fundamental in helping people connect. Even so, challenges continue to escalate. Cyberattacks increase 80 percent year on year. In 2023, over 200 subsea cables were reported as damaged worldwide. And in the face of worsening climate crises, nature and natural disasters are increasingly impacting physical infrastructure. And that’s why we need to address this issue of resilience, and we have to do it through the lens of connectivity, of redundancy, security. And when I say security, I mean physical and cyber, and of course, robustness and quality. With the Global Digital Compact as a key milestone on the journey to the WSIS Plus 20 review next year, I think we have an incredible opportunity to strengthen that foundation to build a more resilient digital future.
And then the third piece, which His Excellency so eloquently described, is digital inclusion. Digital inclusion in all its dimensions, including skilling. We must move from conversation to concrete action. ITU data shows that 68 percent of the world is online, and that means, as I mentioned, a third of humanity is offline. A third of humanity is digitally excluded. Eighty-nine million more men than women are using the Internet in 2024. And in least developed countries, only 35 percent of the population has access to the Internet. Digital inclusion, again, as His Excellency so well described, is an economic imperative. It’s one that requires closing not just one divide, but several digital divides: between urban and rural, between older and younger populations, and across abilities, gender, economic means, and educational levels. The Beijing Plus 30 review starts next year, and it’s the perfect opportunity to address that digital gender gap and to target interventions to close it. Because, ladies and gentlemen, when we work together, we can make real progress. Stories like Jeneline’s remind us of what’s possible, and they remind us of what’s at stake if we fail to preserve that multi-stakeholder foundation on which the Internet we want must be built. Ladies and gentlemen, look around you in this room. Look at the expertise. Look at the experience, and look at the dedication in this room, where we have gathered governments, the private sector, academia, civil society, and, of course, the technical community. Think about the theme that IGF 2024 is addressing. Building this multi-stakeholder governance is how we turn digital dreams like Jeneline’s into reality. Our shared digital future hangs in the balance, ladies and gentlemen. So, let’s make this IGF count. Thank you very much. Shukran jazeelan. Thank you.

Announcer: Please welcome on stage, His Excellency, Mr. Krzysztof Gawkowski, the Deputy Prime Minister and Minister of Digital Affairs of the Republic of Poland.

Krzysztof Gawkowski: Your Royal Highnesses, Your Excellencies, Ladies and Gentlemen, I feel extremely honoured to participate in the opening of IGF 2024. The IGF 2024 programme focuses on four key areas, each of which is essential to ensuring that the digital space evolves in an inclusive, responsible and sensible way. The Internet has become the bloodstream of the modern economic system, driving growth, levelling the playing field and connecting people around the world. It is a space that enables access to knowledge, communication, innovation, trade and cooperation, and helps to solve global challenges. On the one hand, the Internet opens the door to new opportunities, giving access to resources of information that were previously inaccessible to many. On the other hand, it is also a place where serious security and equality challenges arise. It is up to us how we use these powerful resources. We must first focus on ensuring that the Internet reflects the values that are fundamental to us – openness, fairness, respect for human rights and equality. We need to ensure that the digital space is a place where freedom of expression, access to information and privacy are protected, and where marginalised groups are not excluded from the opportunities offered by the development of technology. The Internet as a tool has great potential, but at the same time demands responsible governance that ensures a balance between progress and the protection of fundamental rights. As the digital world develops, we must remember that it is up to us what values will be promoted and what the consequences of our actions in this space will be. We must work together to create an Internet that promotes equal opportunities for development and social justice. It is not just about technology, but also about our shared vision that can shape the digital future in line with the values of the global community.
With Poland holding the presidency of the Council of the European Union from January to June 2025, we will have a key role in shaping the digital future of Europe and the world. This will be a great responsibility for us, but also a unique opportunity to promote the values that are the foundation of the European Union – freedom, human rights, democracy and security. We will focus on key areas – cybersecurity, the development of artificial intelligence, the effective implementation of digital regulation and the reduction of bureaucracy to support the digital transformation. We want the rights of Internet users to be protected at all times and their data to be safe. We will strive to effectively implement digital regulation by supporting initiatives that promote digital education. It is up to us what the future of AI will be and how it will serve society. We have the power to shape how AI will be developed. It is up to us to choose whether it is a tool that benefits humanity or an area without rules or ethical principles. We must ensure that the development of AI takes place with respect for human rights and for the common good. It is important to support initiatives that promote the transparency of algorithms and ensure that AI is used in a fair, ethical and friendly way for every person. Cybersecurity is the foundation of effective digital policy. In the face of growing threats in cyberspace, ensuring the resilience of our systems becomes a priority. Cybersecurity is not just one aspect of digital policy, but its foundation. Without cybersecurity, it is impossible to safely develop innovation, to run business activities or to provide access to public services online. For this reason, we need strong international cooperation. I am sure that we can achieve these goals only by working together. Next year will be extremely important from the point of view of Internet and digital space governance.
Together, we will review the 20 years of the World Summit on the Information Society and we will develop recommendations and action plans for the coming years. Next year will also be dedicated to the renewal of the IGF mandate. Finally, I would like to thank you for your attention and congratulate the host country on organizing this important event. I wish all participants a fruitful and rich discussion full of inspiring exchanges of views. I believe that in these meetings we will dare to ask the most difficult questions, and that we will find wise answers to them. This confirms the importance of dialogue within the IGF, a dialogue that allows us to shape together a digital future based on the values that unite us all. Thank you very much.

Announcer: We now invite Her Excellency Ms. Amal El Fallah Seghrouchni, the Minister of Digital Transition, for her remarks as well.

Amal El Fallah Seghrouchni: Excellency Mr. Abdullah Alswaha, Minister of Communications and Information Technology of the Kingdom of Saudi Arabia. Mr. Li Junhua, Under-Secretary-General of the United Nations. Esteemed participants, ladies and gentlemen, As-salamu alaykum. I feel really honored to participate in the opening of this IGF session. Allow me first to congratulate the Kingdom of Saudi Arabia for its amazing hosting of the 19th edition of the Internet Governance Forum, an annual event organized by the United Nations bringing together global experts to discuss and shape international policies and trends in Internet governance in a collaborative manner involving governments, the private sector, and non-profit organizations. And I seize this occasion to congratulate, once again, the Kingdom of Saudi Arabia’s leadership and people for hosting the FIFA World Cup 2034. This achievement is a significant addition to the Kingdom’s growing record of milestones achieved in various fields in line with the objectives of Vision 2030. Today’s high-level session topic is a crucial one, as it tackles transparency and explainability in AI, a subject which concerns each one of us and all of us together. It is worth mentioning that the Kingdom of Morocco is one of the first countries to announce the official implementation of UNESCO’s recommendation on the ethics of artificial intelligence, an implementation which confirms the Kingdom’s commitment to implementing the provisions of this recommendation, which aims to benefit from technology while reducing the risks associated with it. Ladies and gentlemen, Morocco is positioning itself today as a leader on the African continent in the field of artificial intelligence, thanks to the enlightened vision of His Majesty King Mohammed VI, may God assist him, who called for optimally leveraging the enormous development opportunity that the digital transition provides. 
Please allow me to recall some important Moroccan contributions to AI development at the global level. Last June, for example, we organized a high-level forum at the African level that produced the African Consensus of Rabat, a call to action for trustworthy AI. Morocco also hosts a category two center under the auspices of UNESCO, the first of its kind on the African continent. This center is called AI Movement, of which I had the honor to be the executive president and of which I am now the honorary president. In parallel, another UNESCO category two center was accredited to Saudi Arabia, and we have been collaborating on AI and ethics since that time. I would also like to recall the report issued by UNESCO last May in Rabat on the extent of Morocco’s readiness to benefit from the opportunities offered by AI. The report recalled that the Kingdom of Morocco has developed its digital ecosystem, particularly regarding communications, access to data, safe use of the Internet, and protection of personal data, which are key elements for addressing the issue of AI. And as we are addressing AI today from an ethical perspective, I must mention that Morocco has been a key player in the international AI ecosystem. I recall the Manhattan Declaration on inclusive global scientific understanding of artificial intelligence, of which I had the honor to be one of the signatories last September, along with 21 top AI scientists and researchers, a declaration that took place on the sidelines of the 79th session of the UN General Assembly in New York. About two months ago, Morocco also launched its national digital strategy, Digital Morocco 2030, a strategy which has received the gracious endorsement of His Majesty King Mohammed VI, may God assist him. 
This strategy encourages stakeholders to develop high value-added services and offers based on AI, as it supports companies and startups in the field of AI operating in high value-added sectors. I’m quite sure that this panel will be a fruitful one, and I hope that the discussions and exchanges will enable us to foster concrete international collaboration in a way that creates a unified approach to AI ethics and regulation. Thank you very much.

Announcer: Please welcome on stage Mr. Torgeir Micaelsen, the State Secretary of the Ministry of Digitalization and Public Governance at the Government of Norway.

Torgeir Micaelsen: Excellencies, members of Parliament, distinguished delegates, ladies and gentlemen. First of all, I would like to thank the government of the Kingdom of Saudi Arabia for hosting this year’s Internet Governance Forum in this grand venue. When I see how well we are accommodated, I’m convinced that IGF 2024 will turn out to be successful. The overarching theme for our deliberations here in Riyadh is building our multi-stakeholder digital future. This is indeed an appropriate and fundamental guiding principle as we together consider how to develop digital solutions and the Internet to the benefit of the global community and the next generations. The IGF has been a stalwart advocate for an open, accessible and inclusive Internet since the first forum in 2006. The Norwegian government firmly believes that all interested parties shall be involved in the process of governing the Internet, preserving its openness and shaping its future. Close international cooperation and inclusive digital governance are key in order to connect the unconnected and release the full potential of the Internet for everyone. The Internet’s impact has never been more significant: it shapes the everyday life of people and businesses all over the world and stands at the heart of our digital future. Hence, we need to work together to develop and deliver a trustworthy and safe Internet for mankind. Technology development is not without risk, including for our democracies. The current discussions on AI, the too-frequent practice of Internet shutdowns, as well as domestic and transnational disinformation campaigns, are cases in point. We need to establish frameworks which ensure responsible technological innovation and development, respecting human rights and privacy. Human rights are not only valid in the physical world; they must also be protected in cyberspace. 
After all, the Internet should be the place where all individuals can exercise their civil, political, economic, social and cultural rights. Norway remains dedicated to preserving and promoting these rights in the digital realm. Looking ahead, I furthermore emphasize that we all need to take into account sustainability and the United Nations Sustainable Development Goals when we transform societies through digitalization. The IGF can facilitate dialogue on the role of digital technologies in addressing broader sustainability challenges. We must make sure that the impact of the Internet and digital technology overall contributes positively to these important goals. Sustainability also remains one of Norway’s main priorities. We will seek innovative solutions to reduce the digital infrastructure’s environmental impact and utilize the same infrastructure to reduce greenhouse gas emissions in various sectors of society. Let us all commit to reducing the environmental footprint of our digital endeavors, working towards a greener, more sustainable digital future. The United Nations Pact for the Future and the Global Digital Compact have been presented by the UN Secretary-General and successfully adopted. I note with satisfaction that the Global Digital Compact recognizes the IGF as a primary multi-stakeholder platform for discussion of Internet governance issues. Next year, the WSIS Plus 20 review will be conducted by the UN. This is an opportunity to reflect on the digital era’s achievements, challenges, and evolving needs. It is a moment to re-evaluate and to set new goals for a more inclusive, rights-based, and equitable digital future. Beyond WSIS Plus 20, the IGF should remain the primary global arena for multi-stakeholder dialogue and open, inclusive, and informed discussions on Internet governance challenges and opportunities. 
The IGF should continue to develop policies and practices that ensure that the Internet remains a force for positive change, innovation, and global connectivity. Norway wishes to contribute to further developing the IGF as a vital and inclusive arena for all stakeholders. Next year, the IGF will be convened in Norway. On behalf of the Norwegian government, I welcome you all to the IGF that also marks the occasion of the Forum’s 20th anniversary, a pivotal moment for shaping and enhancing the multi-stakeholder dialogue for the years to come. Together here in the vibrant city of Riyadh, as well as in my home country next year, we shall strengthen diversity and collaboration through inclusive digital governance, which is crucial for a vibrant and sustainable digital ecosystem. So, let’s shape the future together. Shukran. Thank you for your attention.

Announcer: We now invite Mr. Kurtis Lindqvist, the CEO of the Internet Corporation for Assigned Names and Numbers (ICANN).

Kurtis Lindqvist: Honorable Ministers, Excellencies, distinguished participants, colleagues, ladies and gentlemen. First of all, I’d like to thank the Kingdom of Saudi Arabia for hosting this year’s IGF, and congratulate our hosts on this very successful Forum. It’s a privilege to join you here in Riyadh at the 2024 Internet Governance Forum. The IGF remains a cornerstone of global dialogue on Internet governance, a platform where governments, civil society, business and the technical community collaborate on an equal footing. Over nearly two decades, this Forum has exemplified the strength of the multi-stakeholder model, helping to shape a resilient and inclusive Internet that benefits billions around the world. ICANN remains steadfast in its commitment and support to the IGF. As we approach the World Summit on the Information Society review, the WSIS Plus 20 Review, I’m reminded of my time as a national delegate at the 2005 WSIS in Tunis, a pivotal moment in shaping the Internet we know today. The significance of this moment cannot be overstated. Likewise, the WSIS Plus 20 Review in 2025 has the potential to influence the future of Internet governance and determine the trajectory of the multi-stakeholder model. Now more than ever, we must come together to ensure this model remains central to our efforts. We have already seen in the text of the Global Digital Compact that Member States recognize and express support for the importance of the IGF, the role of the technical community, and the multi-stakeholder model. This is a good foundation for next year’s negotiations. The multi-stakeholder model has a proven track record with ample successes that many in this room can attest to. During the COVID pandemic, the Internet was a lifeline for billions of people, providing access to education, healthcare, business, connection and so much more. It withstood unprecedented demand without faltering, a testament to decades of collaboration, technical resilience and shared governance. 
This includes the critical contribution of organizations like the Internet Engineering Task Force, whose work on technical standards has been fundamental to ensuring the Internet’s stability and growth. Beyond the pandemic, the multi-stakeholder model has driven progress across multiple dimensions of Internet governance. Take, for example, the strides we have made in fostering a multilingual Internet. Through efforts like internationalized domain names and universal acceptance, we have enabled people to access the Internet in their native languages and scripts, furthering inclusivity and broadening access. Looking forward, we must build on our achievements to create a future that is inclusive, equitable and accessible for all. Today, 5.6 billion people are connected to the Internet, yet billions remain unconnected. Many who are online still face barriers such as affordability, accessibility and digital literacy. Innovative approaches, collaborative efforts and a renewed commitment to inclusivity are required to overcome these obstacles. For the Internet to remain globally connected, secure and resilient, it is essential to include the technical community, including the organizations that safeguard and manage its critical resources, in these conversations. The IGF provides a unique opportunity to address these challenges collectively. It is a space where diverse perspectives come together to shape the Internet’s future. We can use this moment to reaffirm our commitment to the principles that have guided the Internet’s success while evolving to meet the needs of a rapidly changing digital landscape. The Internet’s success is rooted in its global accessibility, seamless interoperability and robust resilience, which are only made possible through open, collaborative governance and a single, globally coordinated system. These principles must be upheld to ensure innovation, security for all users and the continued growth and inclusivity of the Internet. 
The Internet is one of civilization’s greatest achievements. It connects people, drives innovation and fosters economic growth and social progress. However, its future depends on our collective actions. Let us work together to protect what makes the Internet work, its openness, global interoperability and inclusivity, and ensure that it remains a global public good and a force for innovation, economic growth and social progress. Thank you and I look forward to the important discussions this week.

Announcer: Please welcome on stage Mr. Tawfik Jelassi, the Assistant Director-General at the United Nations Educational, Scientific and Cultural Organization, UNESCO.

Tawfik Jelassi: Excellencies, ladies and gentlemen, dear friends, peace be upon you. Distinguished participants, ladies and gentlemen, it’s a great privilege to address you here this morning at the 2024 edition of the Internet Governance Forum on behalf of the UNESCO Director-General, Madame Audrey Azoulay. This event continues to serve as a unique multi-stakeholder platform to foster global dialogue and collaboration in order to shape the digital future that we all want. Let me begin by expressing our heartfelt gratitude to the host country, the Kingdom of Saudi Arabia, for graciously hosting this event and for the warm welcome. I would also like to acknowledge the tireless efforts of the IGF Secretariat in organizing this important gathering. Let me also answer the question of the ITU Secretary-General, Doreen Bogdan-Martin, who asked us this morning: where were you in 2005? Like many in this room, I, too, was in Tunis at the WSIS Summit, as a guest speaker coming from academia; at the time I was a university professor. Clearly, this year marks a very important moment for global digital governance. The Honorable Minister Alswaha has very eloquently shared with us the many challenges that the world faces, including the digital divide. But he also talked quite convincingly about the emerging AI divide. Coming right after the adoption last September in New York of the Pact for the Future and the Global Digital Compact, our event today offers a major milestone for a bold vision for the years to come, a vision grounded in the principles that we all share: human rights, openness, accessibility, and inclusivity. We believe that IGF 2024 will facilitate a collaborative implementation of the transformative agenda of the Global Digital Compact. In his opening remarks, the United Nations Secretary-General, Mr. 
Antonio Guterres, reminded us this morning that technology should serve humanity, not the other way around. UNESCO is honored to contribute to this collective effort with two initiatives that we will launch at this event. The first initiative is the new generation of UNESCO’s Internet Universality Indicators. These are based on the ROAM framework: R standing for a human rights-based approach, O standing for an Internet open to all, including through multilingualism online and catering to minority groups such as indigenous communities, A referring to accessibility, and M to the multi-stakeholder approach. The Internet Universality Indicators of UNESCO have already been adopted by 40 countries worldwide, and they continue to guide evidence-based policymaking and national digital assessments. The second initiative that we will unveil at this IGF meeting is the UNESCO Guidelines for the Use of AI by the Judiciary. This is grounded in the landmark 2021 Recommendation on the Ethics of Artificial Intelligence, a recommendation that is currently being implemented by 60 countries worldwide. To complement these efforts, UNESCO is working closely with two of its associated research centers. Minister El Fallah Seghrouchni this morning mentioned the AI Movement Center in Morocco, which is focusing on AI in Africa. The second center is the International Research Centre on Artificial Intelligence (IRCAI), based in Ljubljana, Slovenia. And we are working together to develop a repository of ethical AI tools. This initiative is based on the use of open-source capabilities for the public sector, the media, and judiciary operators, enabling stakeholders to navigate the opportunities and challenges that AI offers in a judiciary system based on the rule of law. It was clearly stated this morning, especially by the minister, that despite collective efforts, many challenges persist, including the one third of the global population that remains offline today. 
Women and girls in particular, especially in underserved communities, face unique barriers, with only 65% of women connected to the internet. These disparities underscore the urgent need for targeted interventions to bridge the digital divide, which is also a knowledge divide and an education divide. The statistics are equally striking: although 93% of judicial operators are familiar with AI tools, only 9% of them report having the organizational capabilities and guidelines to be trained in the ethical use of AI. To address this gap, UNESCO has so far trained over 8,000 judges, prosecutors, and judicial operators in 140 countries, empowering them to adopt AI in a responsible, ethical way in order to safeguard human rights. Ladies and gentlemen, the digital future we envision, one that is inclusive, sustainable, and human-centered, will not build itself. The IGF stands as a great multi-stakeholder platform to foster collaboration and drive meaningful change. Let’s continue leveraging this unique forum to build an internet of trust, an internet that empowers us all, that bridges divides, and that advances a truly human-centered digital future. Thank you very much, and I wish you every success in this important global conference. Thank you.

Announcer: We now invite Mr. Ke Gong, the president of the World Federation of Engineering Organizations. Thank you.

Ke Gong: Global leaders, distinguished colleagues, ladies and gentlemen, good morning. It is my profound honor as an engineer and a researcher from China to address this esteemed gathering at the opening session of IGF 2024. The overarching theme of IGF 2024, building our multi-stakeholder digital future, resonates deeply with the mission of the World Federation of Engineering Organizations, in short, WFEO. As the largest engineering organization globally, encompassing hundreds of national and international professional organizations, WFEO, with its millions of engineers all over the world, is at the forefront of shaping the Internet’s future. At WFEO, we recognize that the Internet is more than a technological marvel. It is a transformative force for social, economic, and environmental progress. Its potential to bridge divides, connect people and foster innovation is unparalleled. However, this potential can only be fully realized if the Internet remains accessible, secure, and inclusive. As engineers, we bear the responsibility to ensure that the technology we create serves the best interests of society. This responsibility includes designing resilient systems that safeguard against cyber threats, uphold user privacy, promote digital literacy, and equitably distribute digital benefits to all people, especially marginalized communities. In the days ahead, we aim to contribute the unique perspectives and voices of engineers to policymaking and standard-setting processes, particularly in the discussions about digital infrastructure, cybersecurity, and the pivotal role of engineering in achieving the United Nations Sustainable Development Goals. Taking this opportunity, as an academic researcher, I wish to highlight the fundamental importance of basic research in network information theory and intelligence theory. 
Just as Maxwell’s electromagnetic theory laid the groundwork for electrification, we must acknowledge that many challenges we face today stem from the lack of a solid, comprehensive theoretical foundation to explain the ever-evolving Internet and the sophisticated models of artificial intelligence. Therefore, it is imperative to invest more attention, more resources, and more effort into fundamental research in this domain. As we all know, only with collective efforts can we better develop and govern the Internet as a global resource that benefits all people and the globe. I would like to leave you with an inspiring African proverb: if you want to go fast, go alone; if you want to go far, go together. Thank you.

Announcer: Please welcome to the stage, Ms. Palwasha Mohammed Zai Khan, Senator at the Senate of Pakistan.

Palwasha Mohammed Zai Khan: Bismillahirrahmanirrahim. Honorable parliamentarians, distinguished guests, ladies and gentlemen, assalamu alaikum. I am deeply honored to express my profound gratitude to the United Nations Internet Governance Forum, IGF, the Inter-Parliamentary Union, and the Shura Council of Saudi Arabia for convening this significant parliamentary track, which seeks to strengthen digital cooperation in our interconnected world. Today, digital transformation is fundamentally reshaping governance, resource allocation, service delivery, and public engagement. This evolution demands effective governance of digital technologies to ensure outcomes that are inclusive, safe, and equitable, while acting as a catalyst for human resource mobilization, most importantly in developing countries, and for socioeconomic development. Ladies and gentlemen, digitalization also presents profound challenges to democratic principles and human rights, particularly within governance processes such as elections, public debate, and trust in institutions. It is imperative that we as parliamentarians and leaders move beyond merely sharing these challenges. We must make tangible commitments to address these socioeconomic issues through laws and policies that prioritize inclusivity, accountability, and people-centered outcomes, especially in the face of transnational complexities and governance gaps. Parliamentarians must strengthen the present multilateral mechanisms for the governance of digital technologies and extend support to countries that lack governance capacity. A whole-of-society approach is essential, one that collaborates with local leaders, companies, and digital innovators to develop vibrant and inclusive digital ecosystems rooted in sustainability, accountability, and rights. Ladies and gentlemen, Pakistan is making significant strides

Palwasha Mohammed Zai Khan: towards embracing the digital era through strategic initiatives and policies under its vision of Digital Pakistan. Our strategic initiatives include the Digital Pakistan Policy 2018, the Cyber Security Policy of 2021, the draft Artificial Intelligence Policy, and the Personal Data Protection Bill. These efforts are complemented by investments in infrastructure, innovation, and frameworks like the Computer Emergency Response Teams to enhance cyber resilience and foster trust in the digital landscape. Through these endeavors, Pakistan is building the foundation of an inclusive, accountable, and sustainable digital future, demonstrating how nations can position themselves as global digital leaders. Ladies and gentlemen, as parliamentarians, our collective commitment must be to collaboration, capacity building, and adherence to international standards. Together, we can bridge global governance gaps and enable an inclusive digital transformation that benefits all of humanity. In closing, allow me to say that there could not have been a more iconic setting for hosting this very important forum than the capital of the Kingdom of Saudi Arabia, where the fast march of information technology and development unites with deep traditionalism and the heart of religion, creating a beautiful fusion. I would also like to thank the Shura Council especially and the Government of the Kingdom of Saudi Arabia for giving us the chance to visit the two holy mosques, where I will head after this thanks to this invitation. Thank you and good luck.

Announcer: We now invite Ms. Ivana Bartoletti, the Global Chief Privacy and AI Governance Officer at Wipro.

Ivana Bartoletti: Excellencies, colleagues, honourable members of parliament, ladies and gentlemen, it’s a real honour for me to be here with you today. We have said it and heard it many times this morning: we are at a watershed moment in the relationship between humanity and technology. It’s a fantastic moment for us to be in. Over the last few years, we have seen some amazing things that technology has done for us. Incredible. I have the privilege to work for a large company and I have seen how much technology does for us. Think about precision medicine. Think about tools that can reach people with education, medicine and health in places where they could not have been reached before. Think about tools that can support personalised education, including for those who have learning disabilities. And think about the potential in medicine, as was said earlier, with robots performing operations and supporting our health systems. So the potential is fantastic, and we know it. And the fact that we talk about the challenges ahead is not because we don’t love these technologies; it’s the opposite. It’s because we want them, we care, and we care about them so much that we want them to work for everybody. And this is the most important thing that we are here to deliver: how digital technologies and artificial intelligence can work for and benefit the whole of humanity. Sometimes over the last few years we have also seen some pretty bad things. We have seen the Internet, the space that was created to bring us closer, host too much fake news and hate speech. I was addressing the European Parliament just a couple of days ago on deepfakes and the dangers they can be used for, especially in silencing women and the most vulnerable in our societies. So there are just three messages that I want to leave you with today. The first one is the digital gender gap. Look, the digital gender gap is not acceptable. 
Think about artificial intelligence and how the digital gender gap is related to one of the challenges that we are facing in AI, which is fair AI that does not lock people out of essential services, loans, and opportunities. Think about bias in artificial intelligence systems that can encode, perpetuate and crystallize society as it is today, even as we work together towards a brighter future. Bias in AI is very much related to the lack of diversity and gender diversity that we have, and it’s really important that we tackle this, because if we perpetuate the existing world into decision-making about tomorrow, we’re going to fail. So the gender divide is a priority. The second one is privacy. Look, there is no contraposition, no dichotomy, between privacy and innovation. I’d like a strong message to come from us here today: pitching privacy against innovation is a mistake, and it’s something that we must not do. Companies like mine and the private sector can work together to ensure that privacy-enhancing technology is leveraged to safeguard the dignity of people whilst providing innovation for all. So privacy and innovation can go hand in hand, and we must consider privacy a fundamental public good that allows everyone to feel safer, happier, and more respected in our digital space. And the third one is the governance of artificial intelligence. Look, we’ve been talking about this for a very long time now. We have the European AI Act in Europe, and Wipro is one of the 150 companies that are part of the European AI Pact. We have regulation, we have guidance, and we’ve seen massive strides, including what was announced here today, which is really, really important. Governance of AI is not a nice-to-have. AI must be fair, transparent and accountable, with the possibility for individuals to access meaningful information about how their data and their information are processed and used through artificial intelligence. 
People also need to know that if they have been prescribed a medicine or have had an operation involving AI, they can find out where the liability lies. All of these are fundamentally important to building the trust in artificial intelligence that we need if we want to innovate and transform for the public good. Transparent, fair, and accountable. The Global Digital Compact is a fantastic step because it translates what human rights mean in the age of artificial intelligence. But I want to encourage us to go a little bit further and work together, private sector and government, to see how we are going to bring together privacy, security, and AI; to invest in research so we can do that better; and to create tools so that even smaller companies can leverage the best when it comes, for example, to privacy-enhancing technologies in AI. This is a fundamental opportunity that we have right now, and I do believe that the time to shape the relationship between humanity and technology is exactly now. I am delighted to be here. I look forward to this week because I think that we all have an opportunity to shape our digital ecosystem so that it brings benefits to everybody and helps create a better world. Thank you.

Abdullah bin Amer Alswaha

Speech speed: 120 words per minute
Speech length: 1828 words
Speech time: 908 seconds

Closing digital, gender, and AI divides is crucial

Explanation: Abdullah bin Amer Alswaha emphasizes the importance of addressing multiple divides in the digital realm. He highlights the need to close gaps in digital access, gender representation, and AI capabilities to ensure equitable development.

Evidence

He cites statistics showing disparities in digital economy per capita between global north and south, and mentions initiatives like Connecting from the Skies and the Digital Cooperation Organization.

Major Discussion Point

Digital Inclusion and Bridging Divides

Agreed with

Doreen Bogdan-Martin

Ivana Bartoletti

Palwasha Mohammed Zai Khan

Ke Gong

Agreed on

Importance of digital inclusion and bridging divides

Differed with

Doreen Bogdan-Martin

Differed on

Approach to addressing digital divides

Need for AI governance model addressing compute, data and algorithmic divides

Explanation

Alswaha argues for the development of an AI governance model that specifically addresses disparities in computing power, data access, and algorithmic capabilities. He stresses the importance of this to prevent further widening of global inequalities in the AI era.

Evidence

He mentions the compute capacity gap of 63 gigawatts, a shortage of 10 million data scientists and AI professionals, and the risk of 7.5 billion people being left behind in AI development.

Major Discussion Point

AI Governance and Ethics

Agreed with

Ivana Bartoletti

António Guterres

Tawfik Jelassi

Agreed on

Need for robust AI governance

Digital economy represents 15% of global economy

Explanation

Alswaha highlights the significant role of the digital economy in the global economic landscape. He uses this statistic to underscore the importance of digital transformation and the need for inclusive digital development.

Evidence

He states that the digital economy is worth $15.5 trillion, which represents 15% of the global economy.

Major Discussion Point

Digital Transformation and Economic Development

D

Doreen Bogdan-Martin

Speech speed

116 words per minute

Speech length

929 words

Speech time

479 seconds

A third of humanity remains offline, requiring targeted interventions

Explanation

Bogdan-Martin highlights the persistent digital divide, with a significant portion of the global population still lacking internet access. She emphasizes the need for focused efforts to address this issue and promote digital inclusion.

Evidence

She cites ITU data showing that 68% of the world is online, implying that about one-third remains offline.

Major Discussion Point

Digital Inclusion and Bridging Divides

Agreed with

Abdullah bin Amer Alswaha

Ivana Bartoletti

Palwasha Mohammed Zai Khan

Ke Gong

Agreed on

Importance of digital inclusion and bridging divides

Differed with

Abdullah bin Amer Alswaha

Differed on

Approach to addressing digital divides

Digital resilience in infrastructure and governance mechanisms is crucial

Explanation

Bogdan-Martin stresses the importance of building resilient digital infrastructure and governance systems. She argues that this is essential for maintaining connectivity and security in the face of various challenges.

Evidence

She mentions the increase in cyberattacks by 80% year on year and over 200 subsea cables reported as damaged worldwide in 2023.

Major Discussion Point

Cybersecurity and Digital Resilience

I

Ivana Bartoletti

Speech speed

113 words per minute

Speech length

891 words

Speech time

470 seconds

Digital gender gap is unacceptable and must be addressed

Explanation

Bartoletti emphasizes the urgent need to close the digital gender gap. She argues that this disparity is not only unacceptable but also has far-reaching consequences, particularly in the development and application of AI technologies.

Major Discussion Point

Digital Inclusion and Bridging Divides

Agreed with

Abdullah bin Amer Alswaha

Doreen Bogdan-Martin

Palwasha Mohammed Zai Khan

Ke Gong

Agreed on

Importance of digital inclusion and bridging divides

Governance of AI is essential for fair, transparent and accountable systems

Explanation

Bartoletti stresses the importance of establishing robust governance frameworks for AI. She argues that this is crucial for ensuring AI systems are fair, transparent, and accountable to the public.

Evidence

She mentions the European AI Act and the European AI Pact, which Wipro is part of, as examples of efforts towards AI governance.

Major Discussion Point

AI Governance and Ethics

Agreed with

Abdullah bin Amer Alswaha

António Guterres

Tawfik Jelassi

Agreed on

Need for robust AI governance

P

Palwasha Mohammed Zai Khan

Speech speed

107 words per minute

Speech length

234 words

Speech time

131 seconds

Digitalization presents challenges to democratic principles and human rights

Explanation

Khan highlights the potential threats that digital transformation poses to democratic processes and human rights. She emphasizes the need for careful consideration of these challenges in the development of digital governance frameworks.

Evidence

She mentions specific areas of concern such as elections, public debate, and trust in institutions.

Major Discussion Point

Digital Inclusion and Bridging Divides

Agreed with

Abdullah bin Amer Alswaha

Doreen Bogdan-Martin

Ivana Bartoletti

Ke Gong

Agreed on

Importance of digital inclusion and bridging divides

Parliamentarians must strengthen multilateral mechanisms for digital governance

Explanation

Khan calls for parliamentarians to play a more active role in enhancing international cooperation on digital governance. She emphasizes the need for collaborative efforts to address transnational digital challenges.

Evidence

She mentions the need for a whole-of-society approach, collaborating with local leaders, companies, and digital innovators.

Major Discussion Point

Multi-stakeholder Approach to Internet Governance

Pakistan making strides towards digital transformation

Explanation

Khan highlights Pakistan’s efforts in embracing digital technologies and implementing relevant policies. She presents this as an example of how developing nations can position themselves in the global digital landscape.

Evidence

She mentions specific initiatives like the Digital Pakistan Policy 2018, Cyber Security Policy of 2021, Draft Artificial Intelligence Policy, and the Personal Data Protection Bill.

Major Discussion Point

Digital Transformation and Economic Development

K

Ke Gong

Speech speed

83 words per minute

Speech length

379 words

Speech time

271 seconds

Internet should remain accessible, secure and inclusive

Explanation

Gong emphasizes the importance of maintaining an open and inclusive internet. He argues that this is crucial for realizing the full potential of the internet as a tool for social and economic progress.

Major Discussion Point

Digital Inclusion and Bridging Divides

Agreed with

Abdullah bin Amer Alswaha

Doreen Bogdan-Martin

Ivana Bartoletti

Palwasha Mohammed Zai Khan

Agreed on

Importance of digital inclusion and bridging divides

Internet is transformative force for social and economic progress

Explanation

Gong highlights the significant role of the internet in driving societal and economic development. He emphasizes its potential to bridge divides and foster innovation on a global scale.

Major Discussion Point

Digital Transformation and Economic Development

Engineers responsible for designing resilient systems against cyber threats

Explanation

Gong stresses the crucial role of engineers in developing robust digital infrastructure. He argues that engineers have a responsibility to create systems that can withstand cyber threats and protect user privacy.

Major Discussion Point

Cybersecurity and Digital Resilience

Importance of investing in fundamental research on network and intelligence theory

Explanation

Gong emphasizes the need for more investment in basic research related to network information theory and intelligence theory. He argues that this foundational work is crucial for addressing current challenges in internet and AI development.

Evidence

He draws a parallel with how Maxwell’s electromagnetic theory laid the groundwork for electrification.

Major Discussion Point

AI Governance and Ethics

A

António Guterres

Speech speed

141 words per minute

Speech length

309 words

Speech time

131 seconds

Global Digital Compact provides blueprint for humanity’s digital future

Explanation

Guterres presents the Global Digital Compact as a comprehensive framework for shaping the future of digital technologies. He emphasizes its role in ensuring that digital technology serves humanity’s interests.

Evidence

He mentions that the Compact expands the vision of the World Summit on Information Society, addresses emerging challenges, and includes the first universal agreement on AI governance.

Major Discussion Point

AI Governance and Ethics

Agreed with

Abdullah bin Amer Alswaha

Ivana Bartoletti

Tawfik Jelassi

Agreed on

Need for robust AI governance

IGF is primary multi-stakeholder platform for internet governance issues

Explanation

Guterres highlights the importance of the Internet Governance Forum as the main platform for discussing internet governance. He emphasizes its role in bringing together diverse stakeholders to shape digital policies.

Evidence

He mentions that the Global Digital Compact recognizes the IGF in this role.

Major Discussion Point

Multi-stakeholder Approach to Internet Governance

Agreed with

Kurtis Lindqvist

Torgeir Micaelsen

Tawfik Jelassi

Agreed on

Multi-stakeholder approach to internet governance

Technology has potential to accelerate human progress

Explanation

Guterres emphasizes the transformative power of digital technology in advancing human development. He argues that realizing this potential requires appropriate governance and collaborative approaches.

Major Discussion Point

Digital Transformation and Economic Development

K

Kurtis Lindqvist

Speech speed

125 words per minute

Speech length

670 words

Speech time

320 seconds

Multi-stakeholder model has proven track record of success

Explanation

Lindqvist argues for the effectiveness of the multi-stakeholder approach in internet governance. He emphasizes that this model has been crucial in ensuring the internet’s resilience and growth.

Evidence

He cites the internet’s performance during the COVID-19 pandemic as an example of the success of this collaborative approach.

Major Discussion Point

Multi-stakeholder Approach to Internet Governance

Agreed with

António Guterres

Torgeir Micaelsen

Tawfik Jelassi

Agreed on

Multi-stakeholder approach to internet governance

T

Tawfik Jelassi

Speech speed

98 words per minute

Speech length

823 words

Speech time

502 seconds

UNESCO developing guidelines for ethical use of AI in judiciary

Explanation

Jelassi highlights UNESCO’s efforts in creating guidelines for the responsible use of AI in judicial systems. This initiative aims to ensure that AI applications in the judiciary adhere to ethical standards and human rights principles.

Evidence

He mentions that UNESCO has trained over 8,000 judges, prosecutors, and judicial operators in 140 countries on the ethical use of AI.

Major Discussion Point

AI Governance and Ethics

Agreed with

Abdullah bin Amer Alswaha

Ivana Bartoletti

António Guterres

Agreed on

Need for robust AI governance

Need for collaborative implementation of Global Digital Compact

Explanation

Jelassi emphasizes the importance of collective action in implementing the Global Digital Compact. He argues that this collaboration is crucial for shaping a digital future that is inclusive, sustainable, and human-centered.

Major Discussion Point

Multi-stakeholder Approach to Internet Governance

Agreed with

António Guterres

Kurtis Lindqvist

Torgeir Micaelsen

Agreed on

Multi-stakeholder approach to internet governance

T

Torgeir Micaelsen

Speech speed

99 words per minute

Speech length

706 words

Speech time

424 seconds

IGF provides opportunity to shape inclusive digital future

Explanation

Micaelsen highlights the role of the Internet Governance Forum in fostering dialogue and collaboration on digital issues. He emphasizes the importance of this platform in working towards a more inclusive and equitable digital future.

Major Discussion Point

Multi-stakeholder Approach to Internet Governance

Agreed with

António Guterres

Kurtis Lindqvist

Tawfik Jelassi

Agreed on

Multi-stakeholder approach to internet governance

A

Amal El Fallah Seghrouchni

Speech speed

113 words per minute

Speech length

672 words

Speech time

355 seconds

Digital technologies reshaping governance and service delivery

Explanation

Seghrouchni highlights the transformative impact of digital technologies on governance processes and public services. She emphasizes the need for effective governance of these technologies to ensure positive outcomes.

Major Discussion Point

Digital Transformation and Economic Development

L

Li Junhua

Speech speed

96 words per minute

Speech length

822 words

Speech time

510 seconds

Need to address challenges of sophisticated cyberattacks

Explanation

Li Junhua highlights the growing threat of advanced cyberattacks in the rapidly changing digital landscape. He emphasizes the importance of addressing these challenges to maintain a safe and resilient internet.

Major Discussion Point

Cybersecurity and Digital Resilience

K

Krzysztof Gawkowski

Speech speed

93 words per minute

Speech length

765 words

Speech time

493 seconds

Ensuring relevance of cybersecurity systems is a priority

Explanation

Gawkowski emphasizes the critical importance of maintaining up-to-date and effective cybersecurity measures. He argues that this is a key priority in the face of evolving digital threats.

Major Discussion Point

Cybersecurity and Digital Resilience

Agreements

Agreement Points

Importance of digital inclusion and bridging divides

Abdullah bin Amer Alswaha

Doreen Bogdan-Martin

Ivana Bartoletti

Palwasha Mohammed Zai Khan

Ke Gong

Closing digital, gender, and AI divides is crucial

A third of humanity remains offline, requiring targeted interventions

Digital gender gap is unacceptable and must be addressed

Digitalization presents challenges to democratic principles and human rights

Internet should remain accessible, secure and inclusive

Multiple speakers emphasized the critical need to address various digital divides, including access, gender, and AI capabilities, to ensure equitable development and protect democratic principles.

Need for robust AI governance

Abdullah bin Amer Alswaha

Ivana Bartoletti

António Guterres

Tawfik Jelassi

Need for AI governance model addressing compute, data and algorithmic divides

Governance of AI is essential for fair, transparent and accountable systems

Global Digital Compact provides blueprint for humanity’s digital future

UNESCO developing guidelines for ethical use of AI in judiciary

Speakers agreed on the importance of developing comprehensive AI governance frameworks to ensure fairness, transparency, and accountability in AI systems.

Multi-stakeholder approach to internet governance

António Guterres

Kurtis Lindqvist

Torgeir Micaelsen

Tawfik Jelassi

IGF is primary multi-stakeholder platform for internet governance issues

Multi-stakeholder model has proven track record of success

IGF provides opportunity to shape inclusive digital future

Need for collaborative implementation of Global Digital Compact

Speakers emphasized the importance of the multi-stakeholder model in internet governance, highlighting the IGF’s role and the need for collaborative efforts.

Similar Viewpoints

Both speakers highlighted the significant role of the digital economy and the need for resilient digital infrastructure to support its growth and security.

Abdullah bin Amer Alswaha

Doreen Bogdan-Martin

Digital economy represents 15% of global economy

Digital resilience in infrastructure and governance mechanisms is crucial

Both speakers emphasized the responsibility of specific groups (parliamentarians and engineers) in strengthening digital governance and security measures.

Palwasha Mohammed Zai Khan

Ke Gong

Parliamentarians must strengthen multilateral mechanisms for digital governance

Engineers responsible for designing resilient systems against cyber threats

Unexpected Consensus

Importance of fundamental research in network and intelligence theory

Ke Gong

Importance of investing in fundamental research on network and intelligence theory

While most speakers focused on policy and governance issues, Ke Gong uniquely emphasized the need for investment in basic research, which could have significant implications for addressing current challenges in internet and AI development.

Overall Assessment

Summary

The main areas of agreement among speakers included the importance of digital inclusion, the need for robust AI governance, and the value of a multi-stakeholder approach to internet governance.

Consensus level

There was a high level of consensus on these key issues, suggesting a shared understanding of the critical challenges and potential solutions in global digital governance. This consensus implies a strong foundation for collaborative efforts in addressing digital divides, developing AI governance frameworks, and strengthening multi-stakeholder processes in internet governance.

Differences

Different Viewpoints

Approach to addressing digital divides

Abdullah bin Amer Alswaha

Doreen Bogdan-Martin

Closing digital, gender, and AI divides is crucial

A third of humanity remains offline, requiring targeted interventions

While both speakers emphasize the importance of addressing digital divides, Alswaha focuses on a broader range of divides including AI, while Bogdan-Martin emphasizes the need for targeted interventions to connect the unconnected.

Overall Assessment

Summary

The main areas of disagreement revolve around the specific approaches to addressing digital divides and the focus of AI governance.

Difference Level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of addressing digital divides, ensuring inclusive digital development, and establishing proper AI governance. The differences lie mainly in the specific aspects each speaker chooses to emphasize, rather than fundamental disagreements on the issues at hand. This suggests a generally aligned vision for the future of internet governance and digital development, which could facilitate more effective collaboration and policy-making in these areas.

Partial Agreements

Both speakers agree on the need for AI governance, but Alswaha focuses on addressing specific divides in compute, data, and algorithms, while Bartoletti emphasizes fairness, transparency, and accountability in AI systems.

Abdullah bin Amer Alswaha

Ivana Bartoletti

Need for AI governance model addressing compute, data and algorithmic divides

Governance of AI is essential for fair, transparent and accountable systems

Takeaways

Key Takeaways

Digital inclusion and bridging divides (digital, gender, AI) is crucial for equitable development

AI governance and ethics are essential to ensure fair, transparent and accountable systems

A multi-stakeholder approach is vital for effective internet governance

Digital transformation has significant potential for economic and social progress

Cybersecurity and digital resilience are critical priorities

Resolutions and Action Items

Implement the Global Digital Compact as a blueprint for humanity’s digital future

Establish an independent international scientific panel on AI

Initiate a global dialogue on AI governance within the United Nations

Renew the IGF mandate in 2025

Conduct the 20-year review of World Summit on Information Society outcomes in 2025

Implement UNESCO’s recommendation on the ethics of artificial intelligence

Develop and implement UNESCO Guidelines for the Use of AI by the Judiciary

Unresolved Issues

Specific strategies to close the digital gender gap

Concrete plans to connect the remaining one-third of humanity that is offline

Detailed frameworks for balancing innovation with privacy protection

Specific measures to address the compute, data and algorithmic divides in AI development

Suggested Compromises

Balancing privacy and innovation through privacy-enhancing technologies

Combining traditional values with technological advancement, as exemplified by Saudi Arabia’s approach

Leveraging both digital public infrastructure and AI public infrastructure to address global challenges

Thought Provoking Comments

We are today talking about digital divide, but before we talk about that, we must zoom out and talk about the global divide and then zoom in on the way forward, talking about AI divide and the need for a new AI governance model.

speaker

Abdullah bin Amer Alswaha

reason

This comment reframes the discussion from just digital divide to a broader global divide, introducing AI as a new dimension. It’s insightful because it connects different levels of inequality and suggests a more comprehensive approach.

impact

This comment shifted the focus of the discussion from purely digital issues to broader global inequalities and the emerging challenges of AI. It set the stage for a more holistic conversation about technological governance.

We must agree on a governance model that is able to tackle these three challenges, the compute divide, the data divide, and the algorithmic divide.

speaker

Abdullah bin Amer Alswaha

reason

This comment identifies specific aspects of the AI divide, providing a framework for understanding and addressing the challenges. It’s thought-provoking because it breaks down a complex issue into manageable components.

impact

This comment provided a structure for subsequent speakers to address specific aspects of AI governance and inequality. It deepened the level of analysis by introducing concrete areas for policy focus.

Digital technology must serve humanity, not the other way around.

speaker

António Guterres

reason

This succinct statement encapsulates a fundamental principle for technology governance. It’s insightful because it places human needs at the center of technological development.

impact

This comment set an ethical foundation for the discussion, influencing subsequent speakers to consider the human impact of technological advancements.

We need investment. We really need investment in affordable digital infrastructure and services, and we need that now.

speaker

Doreen Bogdan-Martin

reason

This comment highlights the urgent need for practical action beyond policy discussions. It’s thought-provoking because it shifts the focus from theoretical governance to concrete investment needs.

impact

This comment brought a sense of urgency to the discussion and encouraged participants to consider practical steps for implementation of digital initiatives.

There is no contraposition, no dichotomy between privacy and innovation. I’d like a strong message to come from us here today and say that pitching privacy against innovation is a mistake, and it’s something that we must not do.

speaker

Ivana Bartoletti

reason

This comment challenges the common assumption that privacy and innovation are at odds. It’s insightful because it reframes the relationship between these two important aspects of digital development.

impact

This comment introduced a new perspective on the relationship between privacy and innovation, potentially changing how participants view the balance between these two priorities in policy-making.

Overall Assessment

These key comments shaped the discussion by broadening its scope from digital divide to global inequalities and AI governance, introducing specific frameworks for understanding these challenges, emphasizing human-centric approaches, highlighting the need for urgent practical action, and challenging assumptions about the relationship between privacy and innovation. They collectively moved the conversation from general principles to specific areas of focus and action, while maintaining an emphasis on ethical considerations and human impact.

Follow-up Questions

How can we address the compute divide, data divide, and algorithmic divide in AI?

speaker

Abdullah bin Amer Alswaha

explanation

These new divides in the AI age are critical to address to ensure equitable access and benefits from AI technologies globally.

How can we develop an AI governance model that is inclusive, innovative, and impactful?

speaker

Abdullah bin Amer Alswaha

explanation

A new governance model is needed to address the challenges and opportunities presented by AI on a global scale.

How can we bring down the costs of mobile internet and smartphones in developing countries?

speaker

Doreen Bogdan-Martin

explanation

Addressing affordability is crucial for bridging the digital divide and increasing internet access globally.

How can we improve digital infrastructure resilience against cyberattacks, physical damage, and climate impacts?

speaker

Doreen Bogdan-Martin

explanation

Enhancing the resilience of digital infrastructure is essential for maintaining reliable connectivity and services.

How can we effectively implement the Global Digital Compact and strengthen the multi-stakeholder foundation of internet governance?

speaker

Krzysztof Gawkowski

explanation

Implementing the compact and reinforcing multi-stakeholder governance is crucial for shaping a fair and inclusive digital future.

How can we develop and promote privacy-enhancing technologies that support both innovation and privacy protection?

speaker

Ivana Bartoletti

explanation

Balancing innovation with privacy protection is essential for building trust in digital technologies and AI systems.

How can we invest more in fundamental research in network information theory and intelligence theory?

speaker

Ke Gong

explanation

Developing a solid theoretical foundation is crucial for addressing challenges in internet and AI governance.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #188 Top Business and Technology Trends in Government for 2024

Session at a Glance

Summary

This discussion, led by Fares Shadad from Gartner, focused on top business and technology trends in governments for 2024 and beyond. Shadad highlighted the challenges governments face, including global turmoil, cyber threats, and increasing citizen demands. A significant internal challenge is the presence of legacy systems, which are costly to upgrade. However, generative AI is emerging as a potential solution, potentially reducing modernization costs by up to 70% by 2027.

The presentation outlined several key trends. These include managing trust in digital services, institutional resilience, experience management focusing on citizen-centric solutions, and executive data advocacy. Workforce productivity is another crucial area, with AI expected to enhance human decision-making in 70% of government agencies by 2026.

From a technological perspective, Shadad discussed adaptive security, digital identity ecosystems, AI for decision intelligence, and digital platform agility. By 2026, over 70% of government agencies are predicted to use AI to enhance administrative decision-making. Cloud adoption is also accelerating, with 75% of governments expected to expand platform services for modernization by 2025.

Data management emerged as a critical theme, with more than 60% of government organizations expected to prioritize investments in business process automation by 2026. The discussion emphasized the importance of being “data ready” to accommodate technological innovations and improve government services.

Overall, the presentation highlighted the rapid technological changes governments are facing and the strategies they are adopting to meet these challenges while improving citizen services and operational efficiency.

Keypoints

Major discussion points:

– Challenges facing governments, including global turmoil, cyber threats, citizen demands, and legacy IT systems

– Business trends in government, such as managing digital trust, institutional resilience, and experience management

– Technology trends in government, including adaptive security, digital identity ecosystems, and AI for decision intelligence

– The increasing importance of data management and cloud adoption for government agencies

– The role of AI, particularly generative AI, in modernizing legacy systems and augmenting workforce productivity

Overall purpose:

The purpose of this discussion was to present and explain the top business and technology trends affecting governments in 2024 and beyond. The speaker aimed to provide insights into the challenges governments face and how emerging technologies and strategies are being used to address these challenges.

Tone:

The overall tone of the discussion was informative and engaging. The speaker maintained a conversational style throughout, occasionally asking rhetorical questions to involve the audience. The tone was consistently enthusiastic about the potential of new technologies to solve government challenges, while also acknowledging the complexities involved. There was no significant change in tone throughout the presentation.

Speakers

– Fares Shadad: Represents Gartner

Additional speakers:

None identified

Full session report

Expanded Summary of Government Technology Trends Discussion

This comprehensive summary details a presentation by Fares Shadad, a Senior Director Analyst at Gartner, on top business and technology trends in governments for 2024 and beyond. The discussion highlighted the challenges governments face, emerging solutions, and key trends shaping the future of government operations and services.

Challenges Facing Governments

Shadad emphasized that governments are grappling with multiple challenges in the current landscape:

1. External pressures: Global turmoil, cyber threats, and increasing citizen demands for improved services and quality of life.

2. Internal challenges: Legacy IT systems, which are costly and difficult to upgrade, pose a significant obstacle to modernization efforts.

3. Balancing act: Governments must navigate the complex task of addressing external pressures while meeting rising citizen expectations for better services.

These challenges underscore the need for governments to adapt quickly and embrace innovative solutions to remain effective and responsive to citizen needs. Shadad posed a rhetorical question: “If you could wish for one service from your government, what would it be?”

Legacy Systems and Modernization

A significant point emphasized in the presentation was the challenge posed by legacy systems:

1. Outdated technology: Many government agencies rely on decades-old systems that are difficult to maintain and upgrade.

2. Modernization costs: Replacing these systems is often prohibitively expensive and time-consuming.

3. Innovation barrier: Legacy systems hinder the adoption of new technologies and impede digital transformation efforts.

By 2027, generative AI is expected to be used to analyze and plan improvements for legacy systems, potentially reducing modernization costs by up to 70%.

Business Trends in Government

Several key business trends were identified as crucial for governments moving forward:

1. Managing trust in digital services: As governments adopt new technologies, building and maintaining citizen trust in digital services is paramount.

2. Institutional resilience: Governments need to develop the ability to adapt quickly to changes and unforeseen circumstances. This includes addressing potential energy constraints resulting from increased computing power needs: by 2026, G20 member governments are predicted to experience monthly electricity rationing due to the growing energy demands of advanced technologies.

3. Experience management: There is a growing focus on citizen-centric solutions and increased citizen involvement in service design and delivery. By 2026, 87% of government CIOs are expected to increase investment in producing positive citizen experiences.

4. Executive data advocacy: Increased emphasis on data governance and management at the highest levels of government. This trend also reflects an evolving IT workforce: by 2027, 50% of data analysts are expected to be retrained as data scientists, and current data scientists as AI engineers.

5. Workforce productivity: AI augmentation is emerging as a key tool for enhancing government workforce efficiency and effectiveness.

These trends reflect a shift towards more agile, citizen-focused, and data-driven approaches to governance and public service delivery.

Technology Trends in Government

Shadad outlined several technology trends that are reshaping government operations:

1. Adaptive security: AI-driven security measures are becoming crucial for protecting government systems against evolving cyber threats. By 2028, multi-agent AI use in threat detection and incident response is predicted to increase from 5% to 70%.

2. Digital identity ecosystems: Governments are developing robust systems to authenticate online transactions securely.

3. AI for decision intelligence: By 2026, 70% of government agencies are expected to use AI to enhance human administrative decision-making and measure resulting productivity increases.

4. Digital platform agility: Cloud adoption is enabling scalability and innovation in government services. By 2025, 75% of governments are predicted to expand adoption of cloud platform services for modernization, with hyperscale cloud providers delivering half of the workload.

5. Data management and process automation: Over 60% of government organizations are expected to prioritize investments in business process automation by 2026, up from 35% today.

These technological advancements are poised to transform how governments operate, make decisions, and deliver services to citizens.

The Role of Artificial Intelligence in Government

AI, particularly generative AI, emerged as a central theme in the discussion, with several key applications highlighted:

1. Legacy system modernization: Generative AI will be used to analyze and plan improvements for legacy systems, potentially reducing modernization costs by up to 70%.

2. Decision-making enhancement: AI will augment human decision-making in administrative processes, improving efficiency and accuracy.

3. Cybersecurity: Multi-agent AI systems will significantly increase threat detection and incident response capabilities in government agencies.

The adoption of AI technologies presents both opportunities for improved efficiency and challenges in terms of implementation and resource management.

Conclusion

The presentation by Fares Shadad provided a comprehensive overview of the challenges and opportunities facing governments in the realm of technology adoption and digital transformation. The key takeaways emphasize the need for governments to prioritize modernization efforts, embrace AI and cloud technologies, and focus on building citizen trust in digital services.

As governments navigate these trends, they will need to balance innovation with practical challenges, such as energy consumption and workforce development. The rapid pace of technological change presents both significant opportunities and challenges, placing governments at a critical juncture in their digital evolution.

Moving forward, governments must remain adaptable, citizen-focused, and technologically progressive to meet the demands of an increasingly complex and digital world. The successful integration of these trends and technologies will be crucial in shaping the future of public service delivery and governance.

The presentation closed with a request from the organizers for attendees to complete a survey on the topics discussed.

Session Transcript

Farah Shaddad: Again, on behalf of the organizers, I’d like to welcome you for this session today, where we’re talking about the top business and technology trends in governments. Okay, 2024, and probably 2025 as well, it seems like it’s trending on the same path, all right? My name is Farah Shaddad, I represent Gartner in today’s session, and I’ll be happy to walk you through a few statistics. Okay, we have some trends that we see coming, okay, that already some of it took place and probably more to come. But let me ask you this, I mean, usually I’d love to have a session to be more interactive, okay? I’d like to ask you a question, okay? Apparently it will be a rhetorical question because I will not be getting some feedback from you. If the genie just showed up, okay, and granted you wishes, but not three, only one, okay? One wish is granted to wish for your government to give you something. What is the service that you would require from your government that you feel that it is missing, that you will ask your government to have? Okay. I mean, for the audience who’s here, would you like to share something? Innovation? Okay. Technological innovations, maybe, to adapt more to technological innovation in the government and the services that the government provides, right? Okay. Focusing on security, amazing. I mean, this is very good two points to start with, but we need to remember something very important, the challenges that the government’s at, okay? I mean, look no further. I mean, if you look around you, you see the challenges are taking place around us. Let’s take a look at some of the challenges that we see around us, right? Global turmoils, cyber threats, regional conflicts, local pressures, increasing citizen demands, okay? This is some of the factors that are affecting our lives on a daily basis. Do we agree to it? Of course we do. It’s all around us, all right? 
One, I mean, one pressure and one challenge that governments see nowadays is basically several challenges coming from everywhere, okay? Not only from the external factors that we see here, okay? But sometimes it’s the pressure of the citizens, as we said. Citizens are asking for more products, more services. They care about the quality of life, and the government is sitting in the middle to be able to accommodate what’s happening outside, all the pressure that comes in from the outer world, and at the same time to accommodate the demand of the citizens all around. So therefore, for today’s presentation, we’re discussing the business side of the story and maybe the technical side of the story, where are the trends helping, I mean, trending and the challenges, and what the governments are doing. What are the governments are facing? One thing for a fact, okay? One of the biggest challenges that the government has, which is an internal challenge, is basically what we call in the world of IT is the legacy systems, okay? Is the legacy system a concept that is familiar to you? It’s the old computer systems, because you have a lot of organizations, I mean, a lot of government organizations, ministries, hiyat, other semi-government entities, they have been using IT for the longest time, right? So they have built so many, they invested so many money, time, resources into building their systems, all of a sudden, it became a legacy with a lot of challenges, a lot of demands on the services of the governments, so some of the systems that they have, it became some sort of old, needs to be updated, all right? One of the biggest challenges that the governments have today, how can we upgrade, okay? From our exposure to a lot of government entities all over the world, upgrading those legacy systems is not an easy task, bloody expensive. Luckily for us, we have a new technology nowadays, and we will talk more about it today, that probably will help us. 
All of us probably used by now, I mean, in abundance, chatGPT, right? Okay, chatGPT is an application of generative AI, so what has generative AI has to do with government legacy systems? Think about it, okay? Because of our exposure, all right, by 2027, generative AI will be used to look at the legacy systems and entities, government entities, to help the government entity identify what is this legacy system, what are the details of it, and how to plan to improve it as well. Isn’t that powerful? It’s an amazing revolution, because the alternative to that is basically to suffer, for the entity to suffer, and to spend a lot of money, and bring a lot of consultants to know the history of 30 or 40 years of the legacy systems, to be able to upgrade it, or maybe change it, or modernize it, okay, is the key word here. So by using generative AI, and helping modernizing the old legacy systems at governments, we’re saving up to 70% of the cost. So we have generative AI on one side, and we have the traditional way of modernizing our legacy systems. Gen AI is something else. Do you think this is a good takeaway, for you to know, and to take home with you? Definitely, it’s something to consider, using generative AI, and looking at your legacy systems for future enhancements, or modernization. Okay, as I said, governments are in, they are in so much demand, and they are sitting in the middle, trying to accommodate all the pressures coming from everywhere, okay? The first pressure is the demand by the citizenships, the citizens, okay? The citizens, they’re acquiring more services, they wanted a better quality of life, and they wanted more and more of the government. So that said, the government, the decision makers, the policy makers, they’re pushing the IT people, and every time I mention CIO here, we’re talking about the IT organization within the government. 
So they’re pushing CIOs to become more innovative, to be able to serve the citizens in a more proper, or more advanced way. The second pressure, okay, which is, how to accommodate all the pressures coming from everywhere else, which is the economical part of it, and how to ensure the quality of life based on the economical factors, and provide the citizens with, with a good quality of life. The third, okay, is basically to make things happen. All right? I mean, we know, as citizens, we know that our governments are investing in technology, investing in changing, investing in everything. Conferences, visions, what have you, it’s part of our government’s promises. But the pressure is here is to diverge and to come up to see the value on the ground. We, as citizens, we need to see the value manifest and to be taken advantage of, all right? So this is the pressure that is on governments as well to show the value, okay, to the citizens. So, as I said, we’ll tackle the subject of today from two different perspectives. I’ll share with you some numbers and some trends, what’s going on from a business perspective and a technology perspective. Okay, for trend number one, okay, managing trust and digital, all right? So probably by today, we have reached a point where we trust how to make a transaction on our cell phone, maybe to buy a plane ticket or to order our food. This is a transactional that has gained a lot of our confidence. We know that when we transfer our money from one account to another account, it is a secure transaction, 100%. How about other technologies that it’s being adopted nowadays? Let’s mention the computer vision. Let’s mention the racial profiling issues. So there are so many technologies that might affect the trust and citizenship and the citizens and taking advantage of the services or to bring the lack of trust in citizens. 
So we see that governments are looking on to enhancing this trust and citizen’s eyes to be able to gain more ground on adapting technologies. The second we see is the institutional resilience, okay? Institutional resilience is basically we have to have our governments ready, strong, to be able to accommodate any changes so fast, okay? To be able to accommodate this in a manner that it could recover and adapt so fast to the changes around it, whether it’s coming from citizens, economical regulations, or what have you. So the government entity, it has to be resilient, all right? So by 2026, which is basically just around the corner, okay, in a year time, we’re predicting that G20 members will experience a monthly electricity rationing, okay, basically to worry about electricity consumption. The reason of this trend is basically adopting technologies nowadays as electricity and energy demanding, okay? For a quick example, it’s basically we’re talking about cloud adoption, whether AI solutions, all of that requires a lot of computing power. A lot of computing power will require energy. So energy is one of the issues that we see that it has been trending that government entities are looking at where, how to save, and how to accommodate that, okay? Enough that we see a lot of government entities and energy and, I mean, technology providers are looking into traditional energy sources and non-traditional resources. We’re talking about sustainability of energy, okay, whether it’s renewable energy, or nowadays we started to hear back the nuclear energy as well, okay, to be able to accommodate the energy demand that the computing power needs, okay? Some technology providers are looking at building nuclear plants to be able to feed the energy. The next trend that we see is basically experience management. Experience management is basically looking at the citizens, okay? 
Whatever that we need to do, it has to be a customer-centric or a citizen-centric solution, or solving a citizen issue directly through the citizens. When we say citizen-centric, it’s basically, I mean, bringing the citizens to be part of the solution, and they are on the table making the decision to be able to resolve or to come up with issues, okay? So by 2026, again, 87% of the government CIOs, will increase investment in producing positive citizen experience as a critical business outcome, all right? So we’re investing more and more into bringing in the citizen to be citizens to be able to come up with solutions that serves the citizens, okay? The next, okay, executive data advocacy, which is basically, we need to focus on more, okay, on data, the power of data is where everything lies. So managing data from top, from the decision makers, okay, to filter down on how to govern and how to manage the data is very important, all right? Nowadays, we see a trend that it is where governments, they are focusing more on organizing their data, they govern the data, they’re making it ready to be able to adopt any new technological or maybe advancements that they need. So by 2027, 50% of data analysts will be trained to become, okay, data scientists. And the current data scientists, they are being trained to become AI engineers, okay? So basically, this shift is basically, is taking place because of the power of the data and how to manage data and to make sure that it is ready to be adopted, all right? So that it’s a very powerful trend here is basically people are moving, so data analytics, it became like a norm, okay? In every organization you talk to today, they have a certain level of maturity when it comes data analytics. So the next step up is basically, is how to utilize this data in a more productive and more intelligent way to be able to manage the challenges that we have mentioned earlier. Okay, workforce productivity, all right? 
We’re talking about government entities. The challenges that we have, all right, is basically the productivity of the workforce on such entities. Whether finding the right skills, whether upskilling, whether how to measure the productivity and how to move forward to be able to accommodate. what is demanded on the organizations and the governments and making things happen. So by 2026, 70% of the government agency will use AI to enhance human administrative decisions making and will measure the productivity increase achieving that way. So basically, we’re augmenting the workforce with AI solutions and AI agents to be able to help elevating the productivity without replacing our workforce, of course, okay? So this is something, it’s a trend that we see that it’s happening today, okay? Whereby AI will be augmenting the workforce to achieve the productivity that we’re seeking to elevating that level of productivity. Okay, with that said, okay, this is the pretty much what we see as a trend all over when it comes to business challenges, okay, in the government sector all over. Let’s look at the subject from a technological perspective. And we all know that is with technology, okay, a lot of advancement, a lot of innovation has taken place, it’s rapidly changing and the poor governments in the middle, somewhere they have to deal with it as well. So we see a trend of how to accommodate the technological advancements and moving forward. To start with, okay, the concept of adaptive security, okay, with cybersecurity is a very important aspect, okay? Big challenge to a lot of organizations, government and otherwise, all right? So the concept of adaptive security, whereby utilizing AI to enable how to predict, okay, how to manage attacks, incidents related to cyber attacks or cyber incidents, or to use, okay, to adapt with whatever attacks are coming in or incidents or possible even incidents because there is an intelligence part of it here. 
AI is part of the story and to be adaptive, to be able to move, to learn from whatever happens and to adapt to a better secured networks and systems as well. This is where the concept of adaptive security comes in. So by 2028, multi-agent AI and threat detection, incident response will raise from 5% to a 70%. Imagine, okay, from 5% to a 70%, AI will take place and helping, okay, with the detection and incident response, okay? Then we move to the digital identity ecosystem. Okay, we see governments all over the world, they wanted to make sure they’re taking the lead when it comes to digital identity. What is a digital identity? It’s basically, it’s the concept of, I mean, authenticating and verifying that you are who you claim that you are in your transactions, all right? Okay, you don’t have to go to any place anymore or show any ID to be able to go through a transaction. So the digital identity is a trend all over, it’s taking place just for the governments to adapt, to be able to serve the citizens, to protect its interests and services in a way. So people, I mean, as we all know, I mean, people living in Saudi Arabia, okay, and the digital identity part of the story is extremely mature. And with Apshar and Nafad is something that we see as an example. Every time this subject mentioned somewhere else, okay, they refer to Saudi Arabia’s success and when it comes to digital identity adoption and application. So by 2026, more than 500 million smartphones will handle transactions related to digital identity and transactions. So this is the prediction, more than 500 mobile smartphone will handle, again, digital identity related transactions, okay, using your digital identity. Next is basically AI for decision intelligence, okay. When it comes to AI, adoptions of AI is something, I mean, it’s trending all over the world and definitely we need governments, I mean, they have to take decisions. We need to make more intelligent decisions. That’s why we need data. 
Data, to make sense of data, is basically we’re bringing AI into the story to be able to help us, okay, with the prediction and to making a better decision, okay. So by 2026, over 70% of the government agencies will use AI to enhance human administrative decision making and will measure the productivity increases achieving that way. So again, bringing in AI to be able to elevate the level of decision making. Next is basically digital platform agility. When we’re talking about platforms, okay, we’re not talking about websites or applications. We’re talking about more powerful platforms that serves, okay, our purposes. I mean, today, as a government entity all over the world, okay, I have challenges whereby to be able to scale fast, to innovate. Everybody’s asking me as a government entity to be agile, to be able to accommodate the citizens’ requests, citizens’ demands, and to cope with the technology, citizens’ demands, and to cope with the security threats and cope with everything else, and to be innovative and to scale up so fast at the same time. So the digital platform agility is one of the trends that we see a lot of governments are thinking seriously and moving to the cloud, okay, and using high power generated infrastructures to be able to accommodate the need for scalability and innovation and agility as well. So 75% of government by 2025, which is next year, immediately next week, in two weeks’ time, okay, well, 75% of the government will expand the adoption of platform services for modernization, okay, with hyperscale cloud providers delivering half of the workload, okay? This is the magical world here, is basically by next year, a lot of government entities, okay, 70% of them, 75% of them, will adopt cloud providers and hyperscalers to be able, okay, to move or innovate I mean, to move their 50% of their workload to the cloud, okay, to be able to accommodate the innovation and the requirements and the agility as well, all right? 
Does that sound like a ring? If you are living in the kingdom, and if you’re involved in one of the wonderful measures that measures the maturity of the digital transformation, maturity of organizations in the kingdom here, is Qiyas. Qiyas is basically, one part of its mandate for next year is basically the 50% mark that we’re talking about here. The 50% mark of your workload needs to be on the cloud. So this is part of the trend, and we see it happening in front of our eyes in one of sample governments like in Saudi Arabia. Next trend that we see all over, again, the data management story. Data management story is basically whereby we have to organize ourselves to be able to make ourselves data ready to be able to accommodate any innovations, technological innovations. So when we are data ready, that means we know how to store our data, to manage it, know how to govern it, to make it available for any innovations that we have. So by 2026, more than 60% of government organization will prioritize investments in business process automation up from 35%. So currently it’s a 35% adoption. It will move up to 60% by 2026, by adopting what we call the process automation, which is basically making sure that our data is very well managed and help us in our process automation. Short and sweet, this is currently what we see as trends when it comes to business challenges and technological challenges that trends that the governments are seeing nowadays. Thank you so much for bearing with me for the past few minutes. It’s been a pleasure. One thing that is required for me to share with you is basically a survey. This is something that the organizers asked to do, is basically to fill up a survey on the data.

Fares Shadad

Speech speed

125 words per minute

Speech length

3320 words

Speech time

1581 seconds
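As a quick consistency check, the reported speech length and duration roughly reproduce the stated pace (a minimal sketch; the figures are copied from the statistics above, and the one-word-per-minute difference comes down to rounding in the source):

```python
# Cross-check the session's reported speech statistics.
words = 3320      # "Speech length" from the report
seconds = 1581    # "Speech time" from the report

wpm = words / (seconds / 60)  # words per minute
print(round(wpm))             # 126, consistent with the reported ~125 wpm
```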

Legacy systems are a major internal challenge for governments

Explanation

Legacy systems, which are old computer systems used by government organizations, pose a significant challenge. Upgrading these systems is difficult and expensive, but necessary to meet modern demands and services.

Evidence

By 2027, generative AI will be used to analyze legacy systems and help plan improvements, potentially saving up to 70% of modernization costs.

Major Discussion Point

Major Discussion Point 1: Challenges Facing Governments

Agreed with

Agreed on

Legacy systems are a major challenge for governments

Governments face pressures from global turmoil, cyber threats, and increasing citizen demands

Explanation

Governments are dealing with multiple external pressures including global conflicts, cybersecurity issues, and rising expectations from citizens. These factors create a complex environment for governments to navigate.

Evidence

Examples mentioned include global turmoils, cyber threats, regional conflicts, and increasing citizen demands for more products and services.

Major Discussion Point

Major Discussion Point 1: Challenges Facing Governments

Agreed with

Agreed on

Governments face multiple pressures and challenges

Governments need to balance external pressures with citizen demands for better services

Explanation

Governments are in a challenging position of having to manage external global pressures while also meeting the increasing demands of citizens for improved services and quality of life. This requires a delicate balance and innovative solutions.

Evidence

The speaker mentions that governments are ‘sitting in the middle’ trying to accommodate pressures from the outer world and citizen demands simultaneously.

Major Discussion Point

Major Discussion Point 1: Challenges Facing Governments

Agreed with

Agreed on

Governments face multiple pressures and challenges

Managing trust in digital services is crucial for government adoption of new technologies

Explanation

Building and maintaining citizen trust in digital services is essential for governments to successfully implement new technologies. This includes ensuring security and reliability in various digital interactions and transactions.

Evidence

Examples of technologies mentioned include computer vision and racial profiling issues, which may affect citizen trust if not managed properly.

Major Discussion Point

Major Discussion Point 2: Business Trends in Government

Institutional resilience is needed for governments to adapt quickly to changes

Explanation

Governments need to develop institutional resilience to rapidly adapt to various changes and challenges. This includes being able to recover and adjust to shifts in citizen needs, economic conditions, and regulations.

Evidence

By 2026, G20 members are predicted to experience monthly electricity rationing due to increased energy demands from technology adoption, requiring resilience and adaptation.

Major Discussion Point

Major Discussion Point 2: Business Trends in Government

Experience management focuses on citizen-centric solutions and involvement

Explanation

Governments are increasingly focusing on creating citizen-centric solutions and involving citizens in decision-making processes. This approach aims to directly address citizen issues and improve overall citizen experience.

Evidence

By 2026, 87% of government CIOs will increase investment in producing positive citizen experiences as a critical business outcome.

Major Discussion Point

Major Discussion Point 2: Business Trends in Government

Executive data advocacy is increasing focus on data governance and management

Explanation

There is a growing trend of executive-level focus on data governance and management in government organizations. This involves organizing, governing, and preparing data for adoption of new technologies and advancements.

Evidence

By 2027, 50% of data analysts will be trained to become data scientists, and current data scientists are being trained to become AI engineers.

Major Discussion Point

Major Discussion Point 2: Business Trends in Government

Workforce productivity is being enhanced through AI augmentation

Explanation

Governments are using AI to augment their workforce and increase productivity. This involves implementing AI solutions to assist human workers rather than replace them, helping to address skills gaps and improve efficiency.

Evidence

By 2026, 70% of government agencies will use AI to enhance human administrative decision-making and measure the resulting productivity increase.

Major Discussion Point

Major Discussion Point 2: Business Trends in Government

Agreed with

Agreed on

AI is becoming crucial for government operations and decision-making

Adaptive security using AI is becoming crucial for cybersecurity

Explanation

The concept of adaptive security, which utilizes AI to predict, manage, and respond to cyber attacks and incidents, is becoming increasingly important for government cybersecurity. This approach allows for continuous learning and adaptation to new threats.

Evidence

By 2028, multi-agent AI in threat detection and incident response is predicted to increase from 5% to 70%.

Major Discussion Point

Major Discussion Point 3: Technology Trends in Government

Digital identity ecosystems are being developed to authenticate online transactions

Explanation

Governments worldwide are taking the lead in developing digital identity ecosystems. These systems aim to authenticate and verify individual identities for online transactions without the need for physical identification.

Evidence

By 2026, more than 500 million smartphones will handle transactions related to digital identity. Saudi Arabia is mentioned as a successful example of digital identity adoption.

Major Discussion Point

Major Discussion Point 3: Technology Trends in Government

AI for decision intelligence is enhancing administrative decision-making

Explanation

Governments are increasingly adopting AI to enhance decision-making processes in administrative tasks. This trend aims to improve the quality and efficiency of decisions made by government agencies.

Evidence

By 2026, over 70% of government agencies will use AI to enhance human administrative decision-making and measure the resulting productivity increases.

Major Discussion Point

Major Discussion Point 3: Technology Trends in Government

Agreed with

Agreed on

AI is becoming crucial for government operations and decision-making

Digital platform agility through cloud adoption is enabling scalability and innovation

Explanation

Governments are adopting cloud-based digital platforms to increase agility, scalability, and innovation capabilities. This trend allows government entities to respond more quickly to citizen demands and technological changes.

Evidence

By 2025, 75% of governments will expand the adoption of platform services for modernization, with hyperscale cloud providers delivering half of the workload.

Major Discussion Point

Major Discussion Point 3: Technology Trends in Government

Data management and process automation are priorities for government organizations

Explanation

Government organizations are prioritizing investments in data management and business process automation. This focus aims to make data more accessible and usable for innovations and to streamline operations.

Evidence

By 2026, more than 60% of government organizations will prioritize investments in business process automation, up from 35% currently.

Major Discussion Point

Major Discussion Point 3: Technology Trends in Government

Generative AI will be used to modernize legacy systems, saving up to 70% of costs

Explanation

Generative AI is expected to play a significant role in modernizing government legacy systems. This technology can help identify and analyze old systems, assisting in planning improvements and upgrades more efficiently.

Evidence

By 2027, generative AI will be used to analyze legacy systems in government entities, potentially saving up to 70% of modernization costs compared to traditional methods.

Major Discussion Point

Major Discussion Point 4: Role of Artificial Intelligence in Government

Agreed with

Agreed on

AI is becoming crucial for government operations and decision-making

AI will be used to enhance human decision-making in government agencies

Explanation

Government agencies are increasingly adopting AI to support and improve human decision-making processes. This trend aims to increase productivity and efficiency in administrative tasks without replacing human workers.

Evidence

By 2026, 70% of government agencies will use AI to enhance human administrative decision-making and will measure the resulting productivity increases.

Major Discussion Point

Major Discussion Point 4: Role of Artificial Intelligence in Government

Multi-agent AI will significantly increase threat detection and incident response capabilities

Explanation

The use of multi-agent AI systems in cybersecurity is expected to greatly improve threat detection and incident response capabilities for governments. This technology will enable more adaptive and intelligent security measures.

Evidence

By 2028, the use of multi-agent AI in threat detection and incident response is predicted to increase from 5% to 70%.

Major Discussion Point

Major Discussion Point 4: Role of Artificial Intelligence in Government

Agreements

Agreement Points

Legacy systems are a major challenge for governments

Fares Shadad

Legacy systems are a major internal challenge for governments

The speaker emphasizes that legacy systems pose a significant challenge for government organizations, requiring expensive and difficult upgrades to meet modern demands.

Governments face multiple pressures and challenges

Fares Shadad

Governments face pressures from global turmoil, cyber threats, and increasing citizen demands

Governments need to balance external pressures with citizen demands for better services

The speaker highlights that governments are dealing with various external pressures while also trying to meet increasing citizen demands for improved services and quality of life.

AI is becoming crucial for government operations and decision-making

Fares Shadad

Workforce productivity is being enhanced through AI augmentation

AI for decision intelligence is enhancing administrative decision-making

Generative AI will be used to modernize legacy systems, saving up to 70% of costs

The speaker presents multiple arguments supporting the increasing importance of AI in government operations, from enhancing workforce productivity to improving decision-making and modernizing legacy systems.

Similar Viewpoints

The speaker emphasizes the importance of building trust in digital services and developing robust digital identity systems for successful government technology adoption.

Fares Shadad

Managing trust in digital services is crucial for government adoption of new technologies

Digital identity ecosystems are being developed to authenticate online transactions

The speaker highlights the need for governments to be adaptable and resilient, with cloud adoption and digital platforms playing a key role in achieving this agility.

Fares Shadad

Institutional resilience is needed for governments to adapt quickly to changes

Digital platform agility through cloud adoption is enabling scalability and innovation

Unexpected Consensus

Overall Assessment

Summary

The presentation by Fares Shadad focuses on key business and technology trends in governments, emphasizing the challenges of legacy systems, the need for balancing various pressures, the importance of AI adoption, and the crucial role of digital trust and identity systems.

Consensus level

As this is a monologue by a single speaker, there is no consensus to assess among multiple speakers. However, the speaker presents a coherent and consistent view of the challenges and trends facing governments in terms of technology adoption and digital transformation. The implications suggest that governments need to prioritize modernization efforts, embrace AI and cloud technologies, and focus on building citizen trust in digital services to meet future challenges effectively.

Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

Summary

No disagreements identified as there is only one speaker presenting information.

Difference level

None. The transcript contains a single perspective without opposing viewpoints.

Partial Agreements

Takeaways

Key Takeaways

Governments face significant challenges including legacy systems, external pressures, and increasing citizen demands

Key business trends for governments include managing digital trust, institutional resilience, citizen-centric experience management, data advocacy, and AI-enhanced workforce productivity

Major technology trends in government include adaptive security, digital identity ecosystems, AI for decision intelligence, cloud-based digital platform agility, and data management/process automation

Artificial Intelligence, especially generative AI, will play a crucial role in modernizing legacy systems, enhancing decision-making, and improving cybersecurity in government

Resolutions and Action Items

By 2027, generative AI will be used to modernize legacy systems in government entities, potentially saving up to 70% of costs

By 2026, 87% of government CIOs will increase investment in producing positive citizen experiences

By 2026, 70% of government agencies will use AI to enhance human administrative decision-making

By 2025, 75% of governments will expand adoption of cloud platform services for modernization

Unresolved Issues

Specific strategies for balancing external pressures with increasing citizen demands

Detailed plans for addressing potential energy constraints due to increased computing power needs

Concrete steps for transitioning data analysts to data scientists and data scientists to AI engineers

Suggested Compromises

None identified

Thought Provoking Comments

By 2027, generative AI will be used to look at the legacy systems and entities, government entities, to help the government entity identify what is this legacy system, what are the details of it, and how to plan to improve it as well.

speaker

Fares Shadad

reason

This comment introduces a novel application of generative AI in government systems, highlighting its potential to revolutionize the modernization of legacy systems.

impact

It shifted the discussion towards the practical applications of AI in government, particularly in addressing the long-standing challenge of legacy systems. This opened up a new perspective on how emerging technologies can solve traditional problems in government IT infrastructure.

By 2026, which is basically just around the corner, okay, in a year time, we’re predicting that G20 members will experience a monthly electricity rationing, okay, basically to worry about electricity consumption.

speaker

Fares Shadad

reason

This prediction highlights an unexpected consequence of technological advancement – increased energy consumption – and its potential impact on government policies and infrastructure.

impact

It broadened the scope of the discussion to include environmental and resource management concerns in the context of technological advancement, linking IT trends to broader societal challenges.

By 2026, 87% of the government CIOs, will increase investment in producing positive citizen experience as a critical business outcome

speaker

Fares Shadad

reason

This comment emphasizes the growing importance of citizen-centric approaches in government services, reflecting a shift in priorities for government IT leaders.

impact

It steered the conversation towards the importance of user experience in government services, highlighting a trend towards more citizen-focused governance and technology implementation.

By 2026, 70% of the government agency will use AI to enhance human administrative decisions making and will measure the productivity increase achieving that way.

speaker

Fares Shadad

reason

This prediction illustrates the expected widespread adoption of AI in government decision-making processes, emphasizing the augmentation rather than replacement of human workers.

impact

It deepened the discussion on AI’s role in government, moving from general applications to specific use cases in administrative decision-making and productivity enhancement.

By 2028, multi-agent AI and threat detection, incident response will raise from 5% to a 70%.

speaker

Fares Shadad

reason

This dramatic increase in AI adoption for cybersecurity highlights the rapid pace of technological change and the growing importance of AI in protecting government systems.

impact

It shifted the focus to the critical area of cybersecurity, emphasizing how AI is expected to play a transformative role in this domain within a relatively short timeframe.

Overall Assessment

These key comments shaped the discussion by highlighting several critical trends in government technology adoption, including the use of AI for legacy system modernization, citizen-centric service design, administrative decision-making, and cybersecurity. The comments consistently emphasized the rapid pace of technological change and its wide-ranging impacts on government operations, citizen experiences, and resource management. They also underscored the need for governments to be proactive and adaptive in their approach to technology, balancing innovation with practical challenges like energy consumption and workforce productivity. Overall, these insights painted a picture of governments at a technological crossroads, facing both significant opportunities and challenges in the near future.

Follow-up Questions

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative

Session at a Glance

Summary

This discussion focused on defining and implementing transparency and explainability in AI systems, as well as balancing innovation with ethical governance. Participants from various countries and organizations shared their perspectives on these challenges.

Key points included the need for globally agreed definitions of transparency and explainability, with transparency relating to how AI systems are designed and deployed, while explainability concerns justifying AI decisions. Several speakers emphasized the importance of standards and frameworks to guide ethical AI development, with examples given from Saudi Arabia, Morocco, and international bodies such as ITU and UNESCO.

The discussion highlighted both the potential of AI to accelerate progress on sustainable development goals and address global challenges, as well as technical and non-technical barriers to achieving transparent and explainable AI. These barriers include the complexity of AI models, data privacy concerns, and the need for more AI expertise and public understanding.

Participants agreed on the need to prioritize trust, safety, and accountability in AI governance moving forward. Suggestions for future action included focusing on frugal and inclusive AI development, enhancing global collaboration, supporting capacity building in the Global South, and closing digital divides. The importance of considering cultural and linguistic diversity in AI development was also stressed.

The discussion concluded with calls to create human-centric AI systems that benefit humanity while addressing ethical concerns and potential risks. Participants emphasized the need for ongoing dialogue and cooperation among all stakeholders to shape responsible AI governance and harness AI’s potential for sustainable development.

Keypoints

Major discussion points:

– Defining transparency and explainability in AI, and their importance for building trust

– National and international efforts to promote ethical AI development and use

– Challenges and barriers to implementing transparent and explainable AI systems

– Leveraging AI to achieve sustainable development goals and address global challenges

– Priorities and actions needed to advance responsible AI governance by 2025 and beyond

The overall purpose of the discussion was to explore how different stakeholders define and approach transparency and explainability in AI, examine real-world examples and challenges, and identify priorities for advancing responsible AI governance and development globally.

Speakers

– Latifa Al-Abdulkarim, Assistant Professor of Computer Science, King Saud University (Moderator)

– Gong Ke, Executive Director of the Chinese Institute for the New Generation Artificial Intelligence Development Strategies, Chinese Academy of Engineering

– Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union (ITU)

– His Excellency Dr. Abdullah bin Sharaf Alghamdi, President of the Saudi Data & AI Authority (SDAIA), Kingdom of Saudi Arabia

– Amal El Fallah Seghrouchni, Executive President of the International Center of Artificial Intelligence of Morocco (AI Movement) within the Mohammed VI Polytechnic University

– Li Junhua, United Nations Under-Secretary-General for Economic and Social Affairs

– His Excellency Abdullah bin Amer Alswaha, Minister of Communications & Information Technology, Kingdom of Saudi Arabia

Full session report

Expanded Summary of AI Transparency and Explainability Discussion

Introduction:

This discussion, moderated by Latifa Al-Abdulkarim, brought together experts from various countries and organizations to explore the challenges and opportunities surrounding transparency and explainability in artificial intelligence (AI) systems. The conversation focused on defining these concepts, examining their importance in building trust, and identifying priorities for advancing responsible AI governance globally.

Key Definitions and Concepts:

A crucial starting point for the discussion was establishing clear definitions of transparency and explainability in AI. Doreen Bogdan-Martin, representing the International Telecommunication Union (ITU), provided a helpful distinction: transparency relates to how AI systems are designed and deployed, while explainability concerns justifying AI decisions. Amal El Fallah Seghrouchni, Executive President of the International Center of Artificial Intelligence of Morocco, added that it is important to justify the decision given by the system for better explainability.
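This notion of explainability, justifying a specific decision rather than describing how the system was built, can be made concrete with a small, purely illustrative sketch. For a simple linear scoring model (the kind of scoring application the session mentions for justice, medicine, and credit), one basic form of explanation is reporting each feature's contribution to the final score. The function, feature names, and weights below are all hypothetical, not something presented in the session.

```python
# Purely illustrative sketch: for a linear scoring model, a decision can be
# "explained" by breaking the score into per-feature contributions that a
# reviewer can audit. All names and weights here are hypothetical.

def explain_score(weights, features, bias=0.0):
    """Return the total score and a per-feature breakdown of it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-style scoring decision
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = explain_score(weights, applicant, bias=1.0)
# score is about 2.9 (1.0 bias + 2.0 income - 1.6 debt + 1.5 years_employed)
```

The property that makes such a justification auditable is that the reported contributions sum back to the score. Real systems built on non-linear models need attribution techniques such as SHAP or LIME, which generalize this same idea.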

National and International Efforts:

Participants shared insights into various initiatives aimed at promoting ethical AI development and use:

1. Saudi Arabia: His Excellency Dr. Abdullah bin Sharaf Alghamdi, President of the Saudi Data & AI Authority (SDAIA), highlighted the country’s development of national AI ethics frameworks and initiatives. He also mentioned Saudi Arabia’s collaboration with international organizations such as ITU, OECD, and ISESCO in AI governance efforts.

2. China: Gong Ke, Executive Director of the Chinese Institute for the New Generation Artificial Intelligence Development Strategies, Chinese Academy of Engineering, mentioned steps being taken to promote responsible AI deployment, including the concept of “double increases and double decreases” in AI development.

3. Morocco: Amal El Fallah Seghrouchni discussed Morocco’s efforts in AI, particularly addressing the challenges posed by linguistic diversity. She highlighted the country’s three languages and the complexities this presents for inclusive AI development.

4. International bodies: Doreen Bogdan-Martin discussed ITU’s collaboration with partners like IEC, ISO, IEEE, and IETF through the World Standards Cooperation (WSC) group, focusing on multimedia authentication, deepfakes, and misinformation. She also mentioned the development of an AI readiness framework in collaboration with ITU and the launch of the “Green Digital Action” and the COP29 Declaration on Green Digital Action.

5. United Nations: Li Junhua, United Nations Under-Secretary-General for Economic and Social Affairs, highlighted the UN's efforts in AI governance, including the formation of an interagency working group on AI.

Technical Barriers and Challenges:

Several speakers identified key challenges in implementing transparent and explainable AI systems:

1. Complexity: Amal El Fallah Seghrouchni noted that the complexity of AI models makes them difficult to explain, particularly deep learning systems.

2. Data privacy: Gong Ke highlighted data privacy concerns as a challenge for transparency.

3. Regulatory gaps: Amal El Fallah Seghrouchni pointed out that regulations struggle to keep pace with rapid AI advancements, emphasizing the need for flexible regulatory frameworks.

4. Talent shortage: The lack of AI expertise was identified as a major barrier to implementation.

5. Linguistic diversity: Amal El Fallah Seghrouchni raised the issue of language diversity posing challenges for inclusive AI development, citing the example of Morocco’s three languages.

Leveraging AI for Sustainable Development:

Participants emphasized the potential of AI to accelerate progress on sustainable development goals (SDGs) and address global challenges:

1. Doreen Bogdan-Martin stated that AI could accelerate progress on SDGs by 70%.

2. Li Junhua highlighted AI's ability to enable real-time data analysis for policymaking, address structural inequalities, aid disaster response, and help with climate prediction and resource mobilization.

Priorities for Future AI Governance:

As the discussion progressed, speakers proposed several priorities for advancing responsible AI governance:

1. Trust, safety, and accountability: His Excellency Dr. Abdullah bin Sharaf Alghamdi emphasized the need to focus on these aspects alongside collaboration.

2. Frugal, trustworthy, and inclusive AI: Amal El Fallah Seghrouchni advocated for this approach to AI development, emphasizing the concept of “doing more with less.”

3. Global collaboration: Li Junhua stressed the importance of cooperation among all stakeholders.

4. Closing digital and AI gaps: Doreen Bogdan-Martin highlighted this as a priority, particularly for developing regions.

5. Capacity building: Gong Ke emphasized the need to build engineering capacity, especially in developing regions, mentioning the World Federation of Engineering Organizations' 10-year engineering capacity building program for Africa.

6. Standards development: Doreen Bogdan-Martin stressed the importance of standards in AI development to ensure interoperability and responsible practices.

Data Quality vs. Quantity:

The discussion also focused on the approach to data in AI development. While some speakers implied the need for extensive data to leverage AI’s potential, Amal El Fallah Seghrouchni challenged this notion, advocating for focused, high-quality datasets over large quantities of potentially unreliable data.

Conclusion:

The discussion concluded with a call for creating human-centric AI systems that benefit humanity while addressing ethical concerns and potential risks. Participants emphasized the need for ongoing dialogue and cooperation among all stakeholders to shape responsible AI governance and harness AI's potential for sustainable development.

Several thought-provoking questions were raised for future consideration, including the validity of the Turing test for modern AI systems, the development of context-specific metrics for explainability and transparency, and strategies for creating more frugal, trustworthy, and inclusive AI systems.

Overall, the discussion highlighted the complex challenges and significant opportunities presented by AI technology. While there was broad consensus on the importance of transparency, explainability, and responsible development, the specific approaches to addressing these challenges may vary based on regional contexts and priorities. This underscores the need for continued international collaboration and dialogue to shape the future of AI governance.

Session Transcript

Latifa Al-Abdulkarim: I will go first to describe the general theme of this interesting session. So in this session, we want to know how AI actors, users, and regulators define transparency and explainability in the context of AI. And is this definition a consensus definition? While going through some real-world examples to show the significance of using transparency and explainability, we also want to dig into the technical and other challenges that make AI systems hard to explain. And since we have a very interesting diverse group here, moving from national to regional and global perspectives, we want to discuss the regulatory roles, the shortcomings in the roles, as well as the improvements that we want to achieve, foster international collaboration, and encourage digital dialogue on the roles and expectations from different stakeholders. Finally, this is a question from me. I want to ask ourselves whether the Turing test for AI is still valid for today, or we need a different version to trust, a new trust version for the Turing test to check whether we have trustworthy AI systems or not. Hopefully, some ideas will come from the IGF here in Riyadh. So let’s dive right in. And I will start with you, Doreen. As ITU plays a pivotal role in setting global standards for technologies, how should the term transparency and explainability in the context of AI be defined? And how to promote specifically transparency and explainability in those standards, which is, I know, a very challenging topic. Please. Thank you.

Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a member of HLAB, you know, it’s worth highlighting that, during the discussions stemming from the Secretary-General’s high-level panel on AI—a prominent advisory body—many terms lacked clear, internationally agreed definitions. This recognition underscores the need for greater global consensus and shared understanding of key concepts in the AI domain. I mean, things like fairness, like safety, like transparency. But obviously, when it comes to transparency and when it comes to explainability, they’re both absolutely critical in building public trust, which we need to do when it comes to AI. And we want to ensure accountability for AI systems and AI applications. So for us, I think when it comes to transparency, it’s about that disclosure when it comes to the how. And we want to make sure we understand how systems are designed. We want to understand how those systems are trained. And also to understand how they’re ultimately deployed. So those are the elements we keep in mind when it comes to the how in respect to transparency. When it comes to explainability, it’s a bit more towards the outcomes. It’s the how and the why AI systems produce specific outcomes. And as I said, both are absolutely critical when it comes to building that trust piece. And we want to make sure, as many speakers have noted in the previous opening session, that AI doesn’t get used for the wrong purposes, that AI doesn’t perpetuate biases, that we avoid potential harm. So we need to make sure that those two key features are built in. From the ITU perspective, we put standards at the core. We think that standards are the cornerstone of responsible development of artificial intelligence. Those standards play a key role when it comes to safety, when it comes to transparency, when it comes to ethical use. And that can also help us ensure that we unlock AI’s full potential. 
And I guess the last thing I wanted to mention, and it’s a specific example, we have launched a group as part of the World Standards Cooperation, the WSC, so we’re working with partners like IEC, ISO, IEEE, IETF and others. And we’re focusing in that group on multimedia authentication. We’re looking at deepfakes and we’re looking at misinformation. And I think that’s a good example of partnerships, of collaboration to ultimately make a difference. One other piece is the points about transparency and also explainability are also core to a recently adopted resolution that came out of our Standards Conference, where we had our first AI standards summit. Thank you.

Latifa Al-Abdulkarim: Thank you. Thanks very much. And this is really very interesting. And I think it’s totally aligned with what we exactly are looking for in terms of harmonizing those standards and specifically working and building studies to know the metrics that we need for those explainability and transparency for each context or application. That is quite different when we are discussing those two principles. And the most interesting part that those standards and those global efforts are aligned with many national efforts. And specifically, if I want to ask Your Excellency Dr. Abdullah about Saudi Arabia. Saudi Arabia has made significant strides when it comes to promoting the ethical use and development of AI. Could you please share more about the Kingdom efforts and initiatives in advancing AI ethics, transparency and explainability? Thanks.

Dr. Abdullah bin Sharaf Alghamdi: Thank you, Dr. Latifa. First, I would like to welcome my fellow panelists to Riyadh. And it’s a great pleasure to share the stage with such distinguished visionaries and thought leaders. First, let me just talk about the beginning of our journey in Saudi Arabia and the area of AI that started back in 2019 when the Saudi Data and AI Authority was established. So, we placed a strong emphasis on embedding the ethics into the core of all AI initiatives since then. We focused more at the beginning on the AI ethics framework. And basically, Saudi Arabia was among the early countries adopting the UNESCO recommendation on AI ethics. So, a year after that, back in the Global AI Summit, the second one back in 2022, we announced our National AI Ethics Framework. And the beauty of that framework, it was associated with the incentive program that was announced earlier this year. The idea of the program to encourage the governmental entities to register in a platform and also to undergo a number of surveys. And based on their performance, based on their maturity level, they are granted badges. And on this stage, two months ago, we celebrated 20 entities from the public and private sectors and granted them badges. And also, this framework, the National AI Ethics Principles Framework was also recognized as a champion by the ITU WSIS a while ago. So, this signifies our commitment here in Saudi Arabia to align also with the international community in those initiatives. And also, the government has introduced a unique initiative by establishing the International Center for Artificial Intelligence Research and Ethics. And proudly, the UNESCO has recognized the center as a global and regional partner to advance the AI ethics locally and worldwide. Only a few days ago, the UNESCO has published its report on AI in Saudi Arabia that highlighted a number of unique achievements and initiatives on AI ethics. 
And this is a great achievement of Saudi Arabia, completing the RAM methodology requirements among 10 countries worldwide.

Latifa Al-Abdulkarim: Congratulations, Your Excellency, for all these efforts and incentives that we are doing here in Saudi regarding the ethical use of AI and targeting transparency and explainability in specific. And maybe the most interesting part that we are also considering the cultural aspects and providing context-based AI systems while at the same time following all these ethical guidelines that we are working on here in Saudi Arabia. Talking about culture, it’s very interesting to know and more about you from your side, Dr. Amal El Fallah Seghrouchni, about Morocco. Morocco is a nation bridging the Arabs, African worlds and putting at a crossroads of culture as well as economics and technological exchange. How is the Ministry setting this benchmark, I would say, for ethical AI practices and specifically for transparency and explainability?

Amal El Fallah Seghrouchni: Thank you very much for the question. Yes, Morocco is Arabic-African. We are close to America, we are close to Europe, so we are the gate of many, many things. And Morocco is also very well known for inclusion and diversity. This is very challenging for AI today to have multilingualism and multicultural approaches because if you deal with LLM, for example, the most spread technology today in AI field is, for example, chatGPT. I know there is a very interesting experience in Saudi Arabia about LLM. In fact, if we want to be inclusive enough, we should target all the languages over the world. In particular, in Africa we have like 800 dialects across the continent, and we cannot ask everybody to speak English. It’s just something impossible today. I mean, we can speak English as second language, but the native language is not English, and we have to deal with that. In Morocco, for example, we have three languages, in the north, in the middle of the country, and also in the south. They understand each other, but it’s quite different from one region to another one. So how to apply AI in this context? Because language is also the vector for culture. If you don’t speak the language, you cannot understand the culture of the region, of the country, of the continent, et cetera. So in my ministry, we have a department, as I said, which works on how to make models for a multi-language environment. And we face a lot of challenges, for example, some of these languages don’t have structure, don’t have semantics, don’t have basic building blocks to deal with, computationally speaking. So this is one aspect. Now, if we go back to transparency and explainability. Transparency, for me, is like to explain, and not to explain, because I will be confused with explainability, but it relies on how the system can meet each expectation, how it functions, et cetera. When it comes to explainability, it’s a bit more technical. 
We have to justify the decision given by the system. For example, in scoring, in many cases, in justice, in medicine, in health, et cetera, you deal with scoring. The scoring should be justified on technical parameters of the system. You have to justify your decisions. In legal systems, for example, you cannot just provide judgment, you have to explain the judgment in health, et cetera. So in Morocco, as you know, we have been involved in many global, multilateral initiatives towards AI, in particular, with UNESCO and United Nations, and I can go back to all these initiatives if you have room. But the idea is that Morocco is a very, very aware of the necessity, if we want to build towards AI, we need to provide transparency and explainability to citizen and to stakeholders.

Latifa Al-Abdulkarim: Thank you so much, and you highlighted very interesting point, the one that is related to the language, and the importance of considering inclusion even in all the language, and all the data sets in all the languages, that is gonna be very crucial, as well as taking the definition of transparency and explainability, and making sure that transparency is throughout the whole AI life cycle, while explainability is reasoning and justifying the outcome. However, even for that reasoning and justifying for the outcome, we still have some challenges when it comes to trustworthy. We don’t want to provide too detailed answer, and then that will increase the trust maybe for the end users at the end. That’s very interesting discussion. Take me back to… Take me to you, Mr. Li, to wondering and knowing more about how can we leverage those principles of transparency and explainability in AI system to strengthen institutions, governance, and capacity building, specifically for national levels.

Li Junhua: Well, thank you. Thank you, Madam Moderator, for raising this important question. Perhaps at the outset, I just want to say a few words about UN DESA. We are the custodian of the 2030 Agenda for Sustainable Development. For the UN Development System, the ultimate objective is to assist the member states in achieving the 2030 Agenda, or the Sustainable Development Goals. In this exercise, we definitely need the regional and national institutions to work together to accelerate these efforts. By saying that, we definitely underline the importance of this AI technology in stimulating and accelerating the national and regional efforts, and of course also at the global level. For instance, last May we had an ECOSOC special meeting focusing on how AI technology can sustain and stimulate sustainable development. We need to harness the strategies and the synergies together. And then, why are transparency and explainability so important in capacity building at the global or national level? First, the General Assembly actually adopted two important, landmark resolutions on AI technology. Among those two resolutions there are a few important, common elements; perhaps I could just share them with our participants. Number one, they highlighted very much the explainability of AI for national efforts, because to us, explainable AI plays a vital role in developing capacity for demystifying the algorithms. This enables the policymakers to know that whenever decisions are undertaken, they can be explained to the public constituencies. So we can leverage the enthusiasm and participation of our constituencies at the national level, and it also enhances regional networking. The second important element from those two resolutions is that capacity building should expand, or go beyond, technical training to include ethical and regulatory dimensions.
I don't need to further explain that: whenever there is a need to use AI technology, we need to be very ethical and transparent. Thank you.

Latifa Al-Abdulkarim: Thank you so, so much for mentioning this particular part. We definitely need to work on capacity building across domains. It's not only technical. Everyone thought that this is, for example, a technical forum. It's for everyone who should be part of the future, who should help and contribute to shaping the future, or the digital future that we want. Dr. Gong Ke, I know that you are leading the Chinese Institute for New Generation AI, and I'm sure that you have your inputs and opinions when it comes to how we can leverage AI ethics and capacity building, specifically for transparency and explainability in AI. Thank you.

Gong Ke: Thank you. Based on my Institute's observation of Chinese practices over the past years, I think there are five essential steps to promote the transparent and responsible deployment of AI systems. First, we need to build broad consensus through multistakeholder dialogue, using an institutional approach to engage policymakers, industrial leaders, academia, and civil society to develop a shared understanding of transparency and explainability. Based on this, the second step is to provide clear guidelines and set operational standards for AI transparency and explainability, encouraging the development of ethical AI practices through an open science approach, as recommended by UNESCO. The third step I'd like to mention is building capacity and literacy for AI, by investing in education and training programs for public servants, policymakers, industrial professionals, and the public to understand AI technology and its social implications, so as to enable them to implement the guidelines and standards. Another very important step is to develop technical tools and methodologies to evaluate and verify the transparency of AI systems. Last but not least is promoting international collaboration to establish interoperability of norms and sharing of best practices, to ensure alignment with global standards. I think in this regard, the IGF can play a crucial role in this process.

Latifa Al-Abdulkarim: Thank you very much. You mentioned a lot of very interesting points here that have also been part of the GDC adoption and recommendations. On consensus, we definitely need scientific consensus on the definitions, so that globally we can at least agree on certain definitions; then policy dialogue and interoperability. And the focus on exactly the main requirement that I believe we are lacking globally: experts who can tackle the technical solutions for transparency and explainability in AI. This is very important, and we would like to work on it and to have more experts in this field. This will help us in reaching trustworthy AI systems. Talking about all these requirements within capacity building, I know that we in Saudi Arabia are doing our best to ensure safeguards without limiting AI's potential. I would like to hear from you, Your Excellency Dr. Abdullah, about how exactly we are doing this, and mainly how we are balancing AI governance with innovation.

His Excellency Dr. Abdullah bin Sharaf Alghamdi: As you know, Dr. Latifa, the AI landscape is evolving rapidly, and this evolution brings a lot of opportunities and also introduces a lot of serious risks. So our approach here in Saudi is based on continuous monitoring of the evolution of AI solutions, and also intervening with the right governance tools to make sure that the principles I talked about are taken into consideration. The balance is a very serious issue, and we have to make sure that innovation goes along with the right governance and regulatory tools. For example, recently, with the rise of synthetic content such as misinformation and disinformation, we nationally introduced the deepfake guidelines, for developers and also for users to take into consideration when using or developing such systems. Also, for example, with the emergence of multiple large language models similar to ChatGPT, we introduced the national GenAI guidelines framework, in order to help developers choose the right methodology and follow certain guidelines in developing these solutions, taking into consideration the ethical principles that we talked about. On the other hand, we have also introduced the national AI adoption framework, where we encourage governmental and private sector organizations to adopt AI and to scale AI solutions within their sectors. Recently we celebrated the establishment of 25 AI offices within governmental organizations, and those offices will take care of balancing innovation and regulation, taking into consideration the national AI ethics principles framework and also the GenAI framework we just talked about, and so on and so forth.
In addition to that, we have published the national AI occupational guidelines framework, which sets guidelines for human resources departments to deal with the new jobs and job titles associated with artificial intelligence: jobs like AI engineering, data science, data analysis, and AI development. So we set the guidelines for the job descriptions, performance criteria, job titles, and also the applicants. On the other hand, we have introduced the national academic framework for academic institutions, to be used in making sure the curricula they develop or use take these guidelines into consideration, and we have introduced eight levels, starting from the elementary level, level number one, going through undergraduate, and reaching the PhD level, level number eight. The idea is for academic institutions to take these guidelines into consideration when introducing new programs on AI. Last but not least is the establishment of the International Center for AI Research and Ethics, which was accredited by UNESCO, as we mentioned before. I think these initiatives make Saudi Arabia number three worldwide, after the US and the UK, according to the OECD policy observatory. So this signifies our commitment and dedication to aligning with the international community and introducing new rules and regulations for AI.

Latifa Al-Abdulkarim: Thank you so much, Your Excellency, and well deserved, after going through all those frameworks, some related to the curriculum, occupations, and capacity building itself, while the others take care of how exactly AI is adopted. Given that this was announced only a few months ago and we already have 25 AI offices in government entities, congratulations on these achievements. I believe this really gives a clear example of how we can balance innovation and regulation, and of course we need to keep monitoring our progress and reflect that in our guidelines. Ms. Doreen, I believe that you have very interesting examples of balancing as well, given that you are working on many use cases related to the SDGs, and I would like to hear more from you about how transparent and explainable AI systems can advance those goals. Thank you.

Doreen Bogdan-Martin: Thank you. Maybe, Your Excellency, just to also pick up on the work we've done in terms of the AI readiness framework. I think that's also a great example of how we can work together with countries to help them find ways to leverage artificial intelligence. When it comes to the Sustainable Development Goals, I think it's important to recognize that only 17% of the targets are on track, so we're not in a good place in terms of achieving those targets and goals by 2030. But we're optimistic, because we fully believe that leveraging digital technologies, and in particular artificial intelligence, can actually help us to accelerate progress on the 17 SDGs and on the 169 targets. We've done some joint work with UNDP, and we showed that if you invest in digital and you invest in AI, you can actually accelerate progress by some 70%. So that's our big push: to get all stakeholders to put digital first, put AI first, so that we can make significant progress. In the context of our Artificial Intelligence for Good, AI for Good, which we started back in 2017, we have seen very concrete examples and solutions. We need to leverage those solutions. For instance, there was the great story of Mohamedou, a winner of our AI innovation factory, who comes from West Africa. He has been able to take data together with AI and work with farmers, and the farmers he has worked with have seen an increase in their yields of some 200%. So there are very concrete examples of what we can do when we leverage AI. I think in the UN system it's also important to recognize that we do work together, something that the USG has just mentioned. We have an interagency working group on artificial intelligence that ITU co-chairs with UNESCO, and we have documented more than 400 use cases of how we as a system are leveraging AI to achieve the SDGs.
So whether it's something in the space of climate, healthcare, school connectivity, or gender, we have demonstrated very clearly how you can use AI to achieve the SDGs, and I think that's something we absolutely have to build on. And then, when it comes to climate and sustainability, we heard lots of interventions about that this morning, and I think we have to remember that in the digital ecosystem, in the digital space, we are emitters of greenhouse gases; some estimates show that around 4% of emissions come from the digital sector. We know that artificial intelligence is hungry for energy and also thirsty for water, but if we use it correctly, artificial intelligence can help reduce greenhouse gas emissions by 10%. And I think that's also a space where standards are critical. So we're very focused on the standards component, developing international standards with our partners. We have launched the Green Digital Action Coalition. We had a digitization day at COP 29, where we launched the Green Digital Declaration, with about a thousand or so signatories. And we do need to come together to advance sustainable green solutions when it comes to digital, and specifically to artificial intelligence, so that we can be reducers and not emitters. Thank you.

Latifa Al-Abdulkarim: Thank you. Thank you so much, Ms. Doreen, and I'm sure that Mr. Li could elaborate more on this, in particular on addressing climate action under the UN.

Li Junhua: Well, thank you. I'm so glad to hear from Doreen about SDG implementation. We are off track, behind our objectives, but AI technology could definitely inject a new stimulus into our efforts. I just want to give you three specific examples of how AI technology can help us leapfrog. First, AI in real-time data analysis. That helps policymakers understand the overall situation, how the 17 goals are interlinked: for instance, how much impact education generates on gender equality, and how much impact renewable energy has on our climate efforts, the climate agenda, climate action. The second specific area is that AI systems can address structural inequalities. For instance, if there is an urgent or contingent situation, we need to allocate resources to disaster reduction or disaster relief. It is important for policymakers to make the right judgment on the decision, and that's where AI can help. The third area: just now you mentioned climate action. Well, AI-driven models can do climate prediction and resource mobilization. That is very important for policymakers and for national efforts, and when they articulate their national efforts, these will be integrated into the global or regional efforts together. Thank you.

Latifa Al-Abdulkarim: Thank you so much for going through all these examples related to connectivity, climate, sustainability, and energy. A very important point you have just mentioned is when we need AI to move and take action in urgent situations. This is what we need to prepare ourselves for from now, to get ready for such situations before they happen, though I hope they never do. Your Excellency, Dr. Amal, we have heard a lot of opportunities and enormous potential for AI in different use cases at the national, regional, and global levels. However, we both know that there are a lot of barriers too. I would like to hear from you about those barriers, whether technical or non-technical, and how we can address them. Or, if there are solutions already, how can we build on those solutions?

Amal El Fallah Seghrouchni: Thank you very much. Let me start with the non-technical barriers; it's easy. We have to change mindsets in our countries to make the adoption of AI easier, because it is a huge problem to convince stakeholders to develop AI systems, for different reasons. The first one is that we don't have enough talent and skills in AI, and this is something we should solve. It's a huge problem all over the world; there is a term for it, the war for talent. It's a big problem to solve first. Also, people are afraid of AI, because they think AI will dominate the world, that AI is more intelligent than human beings, et cetera. Now, let me talk about the technical problems. I think the first technical problem is the complexity of the models. As you know, Europe developed the AI Act until 2020, and then ChatGPT came on the table, and the AI Act stopped. Something very disruptive happened in the AI landscape, and we had to reconsider all that we had done before. So something like five years' work on the AI Act was stopped, and now we think we will get the new AI Act in 2025, but it's not certain. We can expect unforeseen situations in AI, and this means that we have to prepare ourselves to change our regulation as quickly as possible to follow the technology. This is not easy, because regulation takes a lot of time compared to developing algorithms or new models. The other thing is that these large language models, for example, deal with millions and sometimes billions of parameters. So it's not possible for a human being to control what is going on in the system. In addition, the system learns, its weights change, and what is going on in the system is not foreseeable by a human being. The second thing is that most AI systems can be considered black boxes.
We have inputs, we have outputs, and we have lots of things happening within the box that nobody can explain. This is why explainability leads to accountability, et cetera. So this is also a huge problem. Then there is the high dimensionality of data: we have a lot of dimensions to deal with, and we also have hybrid data. Sometimes you deal with text, with digits, with images, with videos, and so on. And it's not linear, and human beings cannot reason well when it's not linear. Much of this data, by the way, comes from sensors or radars. So this also makes AI systems very difficult to predict: the non-linear decision-making. Because we focus on correlations, and when we have more than three correlations, we are lost. Sometimes you can go to seven, but you should be very skilled for that. So this also creates difficulty in explaining these systems. Then, data transparency. And about data, I would like to say something, because we think that we need huge data to make systems function. It's not true. When you put everything together from the Internet, you have good data, you have bad data, you have false data, whatever. You don't need all this. You need good data, very well calibrated, and this may even help with the problem of climate change, if I may go fast, because you have to make your data set as clean as possible. That is enough. If you want to work on justice, you don't need data about health. If you want to work on agriculture, you don't need mining data, and so on. A conversational system works with all the data that can be gathered on the Internet, but for systems in other sectors, we don't need all this data. We need specific and specialized data. A model can also behave unpredictably when deployed in a different context. If you take models that work with Arabic, they will not work the same way in another dialect or another language.
So when you change the context, you should curate accurate data, and sometimes you must change the data you use quite deeply.
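[Editor's note: the black-box problem the Minister describes, where inputs and outputs are visible but the inner workings are not, can be probed with simple model-agnostic techniques such as permutation feature importance. The sketch below is an editorial illustration with a hypothetical stand-in model and synthetic data, not anything presented in the session.]

```python
# Editorial sketch: probing a "black box" with permutation feature importance.
# The stand-in model and synthetic data below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                    # three input features
y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)  # feature 0 dominates the label

def black_box(X):
    """Stand-in for an opaque model whose internals we cannot inspect."""
    return (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)

def permutation_importance(predict, X, y, rng, n_repeats=10):
    """Accuracy drop when each feature is shuffled: a bigger drop means
    the model relies more on that feature, even without opening the box."""
    baseline = (predict(X) == y).mean()
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            drops[j] += baseline - (predict(Xp) == y).mean()
    return drops / n_repeats

importance = permutation_importance(black_box, X, y, rng)
print(importance)  # feature 0 dominates, feature 1 is irrelevant
```

Techniques like this underpin the accountability link the Minister draws: even when a system's internals are opaque, its reliance on particular inputs can still be measured and reported.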

Latifa Al-Abdulkarim: I totally agree with you, and I'm sure that Dr. Gong has a lot to exchange with us, given his expertise in GenAI and the use cases you are dealing with in the Institute. Please.

Gong Ke: In view of the limited time, let me focus on the technical barriers, as just mentioned by our colleague from Morocco. The technical barriers arise mainly from the complexity of AI models, and data privacy raises a further challenge to the transparency of the models. To address these barriers, I think that, among many other things, further encouraging and promoting technical innovation is a must. For example, we need to advance AI models from today's purely data-driven models to new models jointly driven by data and knowledge, in terms of knowledge graphs, decision trees, and many others. And also, we need to adopt and further develop privacy-preserving technologies, like differential privacy, federated learning, and homomorphic encryption, to protect sensitive data while enabling transparency. I think further technical innovation in a responsible and ethical way is a must.
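[Editor's note: the first privacy-preserving technology Professor Gong names, differential privacy, can be illustrated in a few lines: noise calibrated to a query's sensitivity is added to an aggregate statistic, so individual records are protected while the released number stays usable. The sketch below is an editorial illustration; the salary data and the epsilon value are hypothetical.]

```python
# Editorial sketch of the Laplace mechanism for differential privacy.
# The salary records and epsilon below are hypothetical illustration values.
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng):
    """Release a differentially private mean of values clipped to [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    # Sensitivity: how much one individual's record can shift the mean.
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

rng = np.random.default_rng(42)
salaries = rng.uniform(30_000, 120_000, size=10_000)  # hypothetical records
private = dp_mean(salaries, 30_000, 120_000, epsilon=1.0, rng=rng)
print(f"private mean: {private:.2f}")  # close to the true mean, but noisy
```

Smaller epsilon means stronger privacy but noisier answers; the transparency point is that this trade-off is explicit and auditable, rather than hidden inside the system.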

Latifa Al-Abdulkarim: Thank you. Thanks so much for mentioning all this. We have heard about the complexity of the models, the complexity of the data, and the complexity of regulations, and how much we need flexible regulations. This is maybe a call for support, again, even for the AI Act, as it is now used as a sandbox for monitoring, evolving, and amending the current regulations. And I totally agree with you that we need to support building more skills for responsible AI on the technical side, to have more technical solutions for responsible AI, including privacy-preserving technologies. I'm looking at the time, and I want to make sure that we preserve at least some time, because I don't want to close this session without knowing the actions. We have discussed the potentials and the challenges, but what can we provide for the IGF in 2025 and beyond? I would start with you, Your Excellency, Dr. Abdullah.

His Excellency Dr. Abdullah bin Sharaf Alghamdi: We started this idea back in 2019, and at that time we sought support from other countries. You remember we paid visits to our friends in Estonia and in South Korea to benefit from their experience in data governance, data centers, and AI as well. So after five years of experience, I think Saudi Arabia now stands ready to share its expertise with other countries. I remember, back in the first Global AI Summit in 2020, we hosted the consultation session for establishing a UN AI advisory body for the Secretary-General. We hosted those consultation sessions during the pandemic, and a few years later, in 2023, the UN Secretary-General announced the establishment and launch of the advisory body, with you being a very active member of this body, Dr. Latifa. Also, at the second Global AI Summit, we announced a number of collaborations with the international community. With the ITU, we worked together and launched the AI readiness framework, and thanks to the ITU for being steadfast in this partnership. With the OECD, we announced a partnership to enhance the AI policy and incidents observatory, which was also announced during GAIN24. We also worked with the OECD to establish a GenAI Center of Excellence here in Riyadh, to help member countries develop AI-based solutions that take the ethics framework into consideration. And we worked with ICESCO to announce, on this very stage two months ago, the Riyadh Charter for the Islamic World. As you know, Saudi Arabia is the heart of the Muslim world; more than two billion Muslims look to Saudi and its practices in AI and in Arabic large language models. So we have launched the Riyadh Charter with ICESCO.
Also, under the umbrella of the International Center for AI Research and Ethics, ICARE, we organized a number of workshops with the GCC countries and the Arab League, in order to increase awareness of the UNESCO RAM, the Readiness Assessment Methodology. As I said before, Saudi Arabia was among the early countries adopting and implementing this methodology. And we are proud, really, to be number one regionally according to the Global AI Index, and also number one globally in AI government strategy according to the same index. Going forward, for our priorities for the year to come, 2025, I recommend that we minimize declarations and focus more on actions; that is the first thing. And I think we need to focus on three main points in order to overcome the gap between governance and innovation: we have to focus on trust, on safety and accountability, and on collaboration. Trust is based, as the esteemed members mentioned, on clear governance for explainability and transparency. For safety, we have to make sure that we have the proper proactive measures, and also the proper guidelines, in order to implement the safety measures and mitigate the risks associated with AI products. Collaboration is essential between governmental entities, industry, and academic institutions, to make sure they share the same goals. With these priorities, Saudi Arabia will be positioned as a global leader in developing AI-based solutions for the benefit of humanity.

Latifa Al-Abdulkarim: Thank you so much, Your Excellency, and of course we will keep on exchanging our expertise with the global and aligning with global initiatives. Dr. Amal, from your perspectives, what steps or empirical methodologies maybe should be prioritized in 2025 to bridge the gap and accelerate the use of transparency and explainability into AI systems?

Amal El Fallah Seghrouchni: I would like to build on His Excellency's diagram; I found it very accurate for this question. He talked about algorithms, computing, and data, and I would like to build on this. For computing, I think we should do more with less. With data, I would like to push for data protection and data calibration; I can explain each concept separately. And for the algorithms, I will connect algorithms to models (it's mainly the same, models and algorithms), and I would like to go for trustworthy and inclusive AI. To do that, governance is of course very important, and regulation, but the objective of all this together is to achieve inclusion in AI and to be as economical as possible towards our environment with frugal AI. This means that we should not use huge data for nothing, or build very big models for nothing. We should customize our algorithms, our models, and our data sets to do more with less. This is my recommendation.

Latifa Al-Abdulkarim: Thank you so much. We have always been calling for low-compute models to save energy, and your suggested priorities for action target this as well. Mr. Li, considering the adoption of the Global Digital Compact and the rapid advancement of AI, what steps or specific actions should we prioritize to harness AI's potential for sustainable development and inclusive growth in this transformative era?

Li Junhua: Thank you. From the UN's perspective, I would like to flag three or four key areas. First, just as His Excellency highlighted, we need to emphasize global collaboration among all stakeholders, because collaboration and cooperation among all stakeholders will be key for the digital transformation. Second, a key area is to utilize this IGF platform; as all the distinguished speakers highlighted this morning, this is a primary, open, and inclusive platform, so we need to tap the potential of the IGF. The third area, I would argue, is to allocate additional effort to support capacity building in the Global South, especially at the local community level, because without open access for them, it is hard to imagine that everyone can benefit. Last but not least, as the Minister has argued, we need to uphold very responsible use of data. I don't need to elaborate further. Thank you.

Latifa Al-Abdulkarim: Thank you so much. Ms. Doreen?

Doreen Bogdan-Martin: Thank you. Perhaps to pick up on that last point, on humanity, because I think in the end it's all about humanity, the betterment of humanity. I had the honor and privilege of meeting Pope Francis a couple of weeks ago, and we spoke about technology and humanity, and he reminded us that artificial intelligence doesn't just need a brain, it needs a heart. It needs empathy. So let's remember that. And of course, when we think about our digital world, what does it mean when we still have a third of humanity that is not yet connected? The Secretary-General often reminds us that we have to make sure AI does not stand for advancing inequalities. So when it comes to what to prioritize, I think we really have to prioritize closing the gap: closing the digital gap, closing the AI gap. As the Minister said this morning, that gap is a compute gap, a data gap, an algorithmic gap, and, as you just said, a capacity building gap. We've got to close those gaps if AI is going to benefit humanity. We also need to focus on standards, responsible standards for AI. And then I guess the last point is about governance. We need more inclusive governance discussions, like here at the IGF, at the WSIS Forum, at AI for Good. We need all stakeholders at the table to discuss governance that benefits all of humanity. Thank you.

Latifa Al-Abdulkarim: Thank you so much. Dr. Gong?

Gong Ke: Let me raise two points. Firstly, I'd like to echo what Under-Secretary-General Li has mentioned: capacity building. Capacity building is so important for the further deployment and application of AI in an inclusive and responsible way. The capacity divide is behind the divides in data, compute, and algorithms. Here I'd like to highlight engineering capacity, which is so important. The World Federation of Engineering Organizations, with the support of UN DESA, UNESCO, and many other United Nations organizations, is carrying out a ten-year engineering capacity building program for Africa. We need your support. Secondly, I'd like to mention the combination of digitalization and sustainable development, to make a dual transformation, or twin transformation, of sustainability and digitalization. In China, we say we want to move AI from chat to product, to benefit people and to achieve double increases and double decreases. The double increases are to increase the quality of production and to increase the efficiency of production. The double decreases are to decrease the carbon footprint and to decrease the cost. I will stop here. Thank you.

Latifa Al-Abdulkarim: Thank you so much. I think these are the best words to close our discussion today. And for our audience, please take these actions to the next IGF: to build safe AI systems and secure a human-centric digital future, which is going to be a solution for most of the issues we are discussing here, and to leave no one behind. And don't forget, AI has a heart too. Thank you so much. Ladies and gentlemen, we now invite you to enjoy a delightful lunch break. Please remember to return here in 90 minutes, as we look forward to resuming the program promptly.


Doreen Bogdan Martin

Speech speed: 140 words per minute
Speech length: 1272 words
Speech time: 542 seconds

Standards are key for responsible AI development

Explanation: Bogdan Martin emphasizes the importance of standards in the responsible development of AI. She states that standards play a crucial role in ensuring safety, transparency, and ethical use of AI.

Evidence: ITU has launched a group as part of the World Standards Cooperation focusing on multimedia authentication, deepfakes, and misinformation.

Major Discussion Point: Defining and Promoting Transparency and Explainability in AI

Transparency relates to system design, explainability to outcomes

Explanation: Bogdan Martin differentiates between transparency and explainability in AI. She explains that transparency is about disclosing how systems are designed, trained, and deployed, while explainability focuses on how and why AI systems produce specific outcomes.

Major Discussion Point: Defining and Promoting Transparency and Explainability in AI

Agreed with: Abdullah Bin Sharaf Alghamdi, Amal El Fallah Seghrouchni

Agreed on: Importance of transparency and explainability in AI

AI can accelerate progress on SDGs by 70%

Explanation

Bogdan Martin highlights the potential of AI to accelerate progress on the Sustainable Development Goals. She states that leveraging digital technologies, particularly AI, can significantly speed up progress on the 17 SDGs and 169 targets.

Evidence

Joint work with UNDP showed that investing in digital and AI can accelerate progress by 70%.

Major Discussion Point

Leveraging AI for Sustainable Development Goals

Agreed with

Li Junhua

Agreed on

AI’s potential to accelerate progress on Sustainable Development Goals

Abdulah Bin Sharaf Alghamdi

Speech speed

108 words per minute

Speech length

1524 words

Speech time

840 seconds

Saudi Arabia has developed national AI ethics frameworks and initiatives

Explanation

Alghamdi outlines Saudi Arabia’s efforts in promoting ethical use and development of AI. He describes various national frameworks and initiatives implemented to ensure responsible AI development and adoption.

Evidence

Saudi Arabia adopted the UNESCO recommendation on AI ethics, announced a National AI Ethics Framework, and established the International Center for Artificial Intelligence Research and Ethics.

Major Discussion Point

Defining and Promoting Transparency and Explainability in AI

Agreed with

Doreen Bogdan Martin

Amal El Fallah Seghrouchni

Agreed on

Importance of transparency and explainability in AI

Differed with

Amal El Fallah Seghrouchni

Differed on

Approach to AI regulation

Focus on trust, safety, accountability and collaboration

Explanation

Alghamdi emphasizes the need to prioritize trust, safety, accountability, and collaboration in AI governance. He suggests focusing on these aspects to bridge the gap between governance and innovation in AI.

Major Discussion Point

Priorities for Future AI Governance

Amal El Fallah Seghrouchni

Speech speed

117 words per minute

Speech length

1448 words

Speech time

739 seconds

Language diversity poses challenges for inclusive AI development

Explanation

Seghrouchni highlights the challenges posed by language diversity in developing inclusive AI systems. She emphasizes the importance of considering multiple languages and dialects in AI development to ensure inclusivity.

Evidence

Morocco has three languages related to Amazigh in different regions, which poses challenges for AI application.

Major Discussion Point

Defining and Promoting Transparency and Explainability in AI

Lack of AI talent and skills is a major barrier

Explanation

Seghrouchni identifies the shortage of AI talent and skills as a significant barrier to AI implementation. She emphasizes the need to address this skills gap to facilitate AI adoption.

Major Discussion Point

Challenges and Barriers to AI Implementation

Complexity of AI models makes them difficult to explain

Explanation

Seghrouchni points out that the complexity of AI models, particularly large language models, makes them difficult to explain. She notes that the high number of parameters and the black-box nature of many AI systems pose challenges for transparency and explainability.

Evidence

Large language models like ChatGPT deal with billions of parameters, making it impossible for humans to control or fully understand what’s happening in the system.

Major Discussion Point

Challenges and Barriers to AI Implementation

Agreed with

Doreen Bogdan Martin

Abdulah Bin Sharaf Alghamdi

Agreed on

Importance of transparency and explainability in AI

Regulations struggle to keep pace with rapid AI advancements

Explanation

Seghrouchni highlights the challenge of regulations keeping up with the rapid advancements in AI technology. She notes that the development of AI regulations takes much longer than the creation of new algorithms or models.

Evidence

The European AI Act development was disrupted by the emergence of ChatGPT, causing a delay in its finalization.

Major Discussion Point

Challenges and Barriers to AI Implementation

Differed with

Abdulah Bin Sharaf Alghamdi

Differed on

Approach to AI regulation

Develop frugal, trustworthy and inclusive AI

Explanation

Seghrouchni advocates for the development of AI that is frugal, trustworthy, and inclusive. She emphasizes the need to customize algorithms, models, and data sets to do more with less, while ensuring inclusivity and trust.

Major Discussion Point

Priorities for Future AI Governance

Li Junhua

Speech speed

106 words per minute

Speech length

734 words

Speech time

411 seconds

AI enables real-time data analysis for policymaking

Explanation

Li highlights the potential of AI in real-time data analysis for policymaking. He explains that AI can help policymakers understand the interrelationships between different Sustainable Development Goals.

Major Discussion Point

Leveraging AI for Sustainable Development Goals

Agreed with

Doreen Bogdan Martin

Agreed on

AI’s potential to accelerate progress on Sustainable Development Goals

AI can address structural inequalities and aid disaster response

Explanation

Li points out that AI systems can help address structural inequalities and improve disaster response. He emphasizes AI’s potential in resource allocation during urgent or contingent situations.

Major Discussion Point

Leveraging AI for Sustainable Development Goals

Agreed with

Doreen Bogdan Martin

Agreed on

AI’s potential to accelerate progress on Sustainable Development Goals

AI models can help with climate prediction and resource mobilization

Explanation

Li discusses the potential of AI-driven models in climate prediction and resource mobilization. He highlights the importance of these capabilities for policymakers in articulating national efforts and integrating them into global or regional initiatives.

Major Discussion Point

Leveraging AI for Sustainable Development Goals

Agreed with

Doreen Bogdan Martin

Agreed on

AI’s potential to accelerate progress on Sustainable Development Goals

Emphasize global collaboration among all stakeholders

Explanation

Li stresses the importance of global collaboration among all stakeholders in harnessing AI’s potential. He argues that cooperation among various stakeholders is key for digital transformation.

Major Discussion Point

Priorities for Future AI Governance

Gong Ke

Speech speed

97 words per minute

Speech length

558 words

Speech time

345 seconds

China is taking steps to promote responsible AI deployment

Explanation

Gong outlines steps China is taking to promote responsible AI deployment. He emphasizes the importance of building consensus, providing clear guidelines, and developing capacity for AI literacy.

Evidence

China is engaging in multistakeholder dialogues, providing authoritative guidelines, investing in education and training programs, and promoting international collaboration.

Major Discussion Point

Defining and Promoting Transparency and Explainability in AI

Data privacy concerns create challenges for transparency

Explanation

Gong highlights that data privacy concerns pose challenges for AI transparency. He suggests that privacy-preserving technologies need to be developed and adopted to address this issue.

Evidence

Gong mentions technologies like differential privacy, federated learning, and homomorphic encryption as potential solutions.

Major Discussion Point

Challenges and Barriers to AI Implementation

Build engineering capacity, especially in developing regions

Explanation

Gong emphasizes the importance of building engineering capacity, particularly in developing regions. He highlights this as a crucial step for the responsible deployment and application of AI.

Evidence

The World Federation of Engineering Organizations is carrying out a 10-year-long engineering capacity building program for Africa.

Major Discussion Point

Priorities for Future AI Governance

Agreements

Agreement Points

Importance of transparency and explainability in AI

Doreen Bogdan Martin

Abdulah Bin Sharaf Alghamdi

Amal El Fallah Seghrouchni

Transparency relates to system design, explainability to outcomes

Saudi Arabia has developed national AI ethics frameworks and initiatives

Complexity of AI models makes them difficult to explain

The speakers agree on the critical importance of transparency and explainability in AI systems, emphasizing the need for clear guidelines and frameworks to ensure responsible AI development and use.

AI’s potential to accelerate progress on Sustainable Development Goals

Doreen Bogdan Martin

Li Junhua

AI can accelerate progress on SDGs by 70%

AI enables real-time data analysis for policymaking

AI can address structural inequalities and aid disaster response

AI models can help with climate prediction and resource mobilization

Both speakers highlight the significant potential of AI in accelerating progress towards the Sustainable Development Goals, particularly through improved data analysis and decision-making capabilities.

Similar Viewpoints

These speakers emphasize the need for responsible AI development that prioritizes trust, safety, and inclusivity, while also promoting collaboration and clear guidelines.

Abdulah Bin Sharaf Alghamdi

Amal El Fallah Seghrouchni

Gong Ke

Focus on trust, safety, accountability and collaboration

Develop frugal, trustworthy and inclusive AI

China is taking steps to promote responsible AI deployment

Unexpected Consensus

Challenges in AI regulation keeping pace with technological advancements

Amal El Fallah Seghrouchni

Abdulah Bin Sharaf Alghamdi

Regulations struggle to keep pace with rapid AI advancements

Saudi Arabia has developed national AI ethics frameworks and initiatives

Despite coming from different regional perspectives, both speakers recognize the challenge of developing regulations that can keep up with the rapid pace of AI advancements, highlighting a shared concern across different governance approaches.

Overall Assessment

Summary

The speakers generally agree on the importance of transparency, explainability, and responsible development of AI, as well as its potential to accelerate progress on sustainable development goals. There is also consensus on the need for capacity building, particularly in developing regions, and the challenges posed by the rapid advancement of AI technology in relation to regulation and governance.

Consensus level

There is a high level of consensus among the speakers on the main issues discussed. This strong agreement suggests a shared understanding of the challenges and opportunities presented by AI across different regions and perspectives, which could facilitate international cooperation in developing governance frameworks and standards for AI. However, the specific approaches to addressing these challenges may vary based on regional contexts and priorities.

Differences

Different Viewpoints

Approach to AI regulation

Abdulah Bin Sharaf Alghamdi

Amal El Fallah Seghrouchni

Saudi Arabia has developed national AI ethics frameworks and initiatives

Regulations struggle to keep pace with rapid AI advancements

While Alghamdi emphasizes Saudi Arabia’s proactive approach in developing AI ethics frameworks, Seghrouchni highlights the challenges of regulations keeping up with rapid AI advancements, suggesting different perspectives on the effectiveness of current regulatory approaches.

Unexpected Differences

Focus on data quantity vs. quality

Doreen Bogdan Martin

Amal El Fallah Seghrouchni

AI can accelerate progress on SDGs by 70%

Develop frugal, trustworthy and inclusive AI

While Bogdan Martin emphasizes the potential of AI to accelerate progress on SDGs, implying the use of extensive data, Seghrouchni unexpectedly argues for a more frugal approach, suggesting that we don’t need huge amounts of data but rather well-calibrated, specific data sets. This difference in perspective on data usage was not explicitly anticipated in the discussion.

Overall Assessment

Summary

The main areas of disagreement revolve around regulatory approaches, the balance between innovation and governance, and the approach to data usage in AI development.

Difference level

The level of disagreement among the speakers is moderate. While there are differing perspectives on specific issues, there is a general consensus on the importance of responsible AI development and the need for transparency and explainability. These differences in approach could lead to varied strategies in AI governance and implementation across different regions, potentially impacting global coordination efforts.

Partial Agreements

Both speakers agree on the need for responsible AI development, but they differ in their approaches. Bogdan Martin emphasizes the importance of standards, while Seghrouchni advocates for frugal, trustworthy, and inclusive AI development.

Doreen Bogdan Martin

Amal El Fallah Seghrouchni

Standards are key for responsible AI development

Develop frugal, trustworthy and inclusive AI

Takeaways

Key Takeaways

Transparency and explainability are critical for building public trust in AI systems

Standards and ethical frameworks are essential for responsible AI development

AI has significant potential to accelerate progress on Sustainable Development Goals

Challenges remain in AI implementation, including model complexity, data privacy, and regulatory gaps

Future AI governance should prioritize trust, safety, accountability, and global collaboration

Resolutions and Action Items

Develop more inclusive governance discussions involving all stakeholders

Focus on closing digital and AI gaps, especially in developing regions

Promote capacity building, particularly engineering capacity

Advance technical innovation in privacy-preserving technologies and explainable AI models

Encourage the development of frugal, trustworthy, and inclusive AI systems

Unresolved Issues

How to effectively balance innovation with regulation in rapidly evolving AI landscape

Addressing the global shortage of AI talent and skills

Developing universally agreed definitions for key AI ethics terms

Ensuring AI benefits all of humanity without exacerbating inequalities

Suggested Compromises

Develop flexible, adaptive regulations that can keep pace with AI advancements

Customize AI models and datasets to specific contexts to reduce computational requirements

Balance comprehensive data collection with privacy concerns through targeted, specialized datasets

Thought Provoking Comments

Transparency, for me, is like to explain, and not to explain, because I will be confused with explainability, but it relies on how the system can meet each expectation, how it functions, et cetera. When it comes to explainability, it’s a bit more technical. We have to justify the decision given by the system.

speaker

Amal El Fallah Seghrouchni

reason

This comment provides a clear distinction between transparency and explainability in AI, which are often conflated. It highlights the nuanced differences in how these concepts apply to AI systems.

impact

This clarification set the tone for more precise discussions about transparency and explainability throughout the rest of the conversation. Other speakers referred back to this distinction in their comments.

We have launched a group as part of the World Standards Cooperation, the WSC, so we’re working with partners like IEC, ISO, IEEE, IETF and others. And we’re focusing in that group on multimedia authentication. We’re looking at deepfakes and we’re looking at misinformation.

speaker

Doreen Bogdan Martin

reason

This comment introduces concrete actions being taken to address pressing issues in AI, specifically around deepfakes and misinformation. It shows how international cooperation is being leveraged to tackle these challenges.

impact

This example of practical collaboration shifted the discussion towards more action-oriented approaches and inspired other speakers to share their own initiatives and partnerships.

In Morocco, for example, we have three languages related to Amazigh, in the north, in the middle of the country, and also in the south. They understand each other, but it’s quite different from one region to another one. So how to apply AI in this context?

speaker

Amal El Fallah Seghrouchni

reason

This comment brings attention to the challenges of applying AI in multilingual and multicultural contexts, highlighting an often overlooked aspect of AI development and deployment.

impact

This insight broadened the discussion to include cultural and linguistic considerations in AI development, leading to further comments on inclusivity and the need for diverse data sets.

We need to leverage those solutions, whether it’s the visually impaired girl from India, Jayatri, who gained her independence by having access to AI glasses. It was a great story. Mohamedou, who was a winner of our AI innovation factory, he comes from West Africa. He’s been able to take data together with AI, work with farmers, and actually the farmers he has worked with, they’ve seen an increase in their yield by some 200%.

speaker

Doreen Bogdan Martin

reason

This comment provides concrete examples of how AI can positively impact individuals and communities, particularly in developing regions. It illustrates the practical benefits of AI beyond theoretical discussions.

impact

These real-world examples shifted the conversation towards the tangible impacts of AI on sustainable development and inspired further discussion on how AI can be leveraged for social good.

We think that we need huge data to do systems, to make system function. It’s not true. It’s, you know, like when you put all together from Internet, you have a good data, you have bad data, you have false data, you have whatever. You don’t need all this. You need good data, very well calibrated, and this maybe solve problem of climate change, if I go fast, because you have to set your data set as clean as possible.

speaker

Amal El Fallah Seghrouchni

reason

This comment challenges the common assumption that more data is always better for AI systems. It emphasizes the importance of data quality over quantity, which is a crucial consideration in AI development.

impact

This insight led to further discussion about responsible data practices and the need for focused, high-quality datasets rather than indiscriminate data collection.

Overall Assessment

These key comments shaped the discussion by broadening its scope beyond technical aspects to include cultural, linguistic, and ethical considerations in AI development and deployment. They highlighted the importance of international collaboration, the need for practical applications of AI for social good, and the significance of responsible data practices. The discussion evolved from theoretical concepts to more concrete examples and action-oriented approaches, emphasizing the real-world impacts of AI on sustainable development and the importance of inclusivity in AI systems.

Follow-up Questions

Is the Turing test for AI still valid today, or do we need a new version to check whether we have trustworthy AI systems?

speaker

Latifa Al-Abdulkarim

explanation

This question addresses the evolving nature of AI and the need to reassess our methods for evaluating AI trustworthiness.

How can we develop metrics for explainability and transparency for each context or application of AI?

speaker

Latifa Al-Abdulkarim

explanation

This highlights the need for context-specific measures of AI transparency and explainability.

How can we address the challenge of AI systems behaving unpredictably when deployed in different contexts or languages?

speaker

Amal El Fallah Seghrouchni

explanation

This question points to the need for research on making AI systems more adaptable and reliable across different cultural and linguistic contexts.

How can we advance AI models from pure data-driven to jointly driven by data and knowledge?

speaker

Gong Ke

explanation

This suggests a need for research into integrating knowledge graphs and decision-making trees into AI models to improve their performance and explainability.

How can we further develop and implement privacy-preserving technologies like differential privacy, federated learning, and homomorphic encryption in AI systems?

speaker

Gong Ke

explanation

This area of research is crucial for balancing transparency with data privacy in AI systems.

How can we develop more frugal, trustworthy, and inclusive AI systems?

speaker

Amal El Fallah Seghrouchni

explanation

This research area focuses on creating AI systems that are more efficient, reliable, and accessible to a wider range of users.

How can we better support capacity building for AI in the Global South, especially at local community levels?

speaker

Li Junhua

explanation

This research area is important for ensuring equitable access to AI technologies and benefits across different regions and communities.

How can we close the gaps in compute, data, algorithms, and capacity building in AI?

speaker

Doreen Bogdan Martin

explanation

This research area is crucial for addressing inequalities in AI development and deployment globally.

How can we develop responsible standards for AI that benefit all of humanity?

speaker

Doreen Bogdan Martin

explanation

This research area is important for ensuring that AI development aligns with ethical principles and societal values.

How can we combine digitalization and sustainable development to achieve ‘double increases’ in production quality and efficiency, and ‘double decreases’ in carbon footprint and cost?

speaker

Gong Ke

explanation

This research area focuses on leveraging AI for both economic and environmental sustainability.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #270 Understanding digital exclusion in AI era

Session at a Glance

Summary

This discussion focused on digital exclusion in the era of AI, particularly in relation to marginalized groups and developing countries. Panelists explored challenges and potential solutions for bridging the digital divide between the Global North and South. Key issues identified included lack of infrastructure, limited internet access, language barriers, and inadequate digital literacy, especially in rural areas.

Speakers emphasized the need for human-centered approaches in AI development, involving local communities in design processes. They stressed the importance of creating content in local languages and investing in education and capacity building programs. Success stories were shared, including youth-led initiatives to improve digital literacy in schools and communities.

The discussion highlighted the lack of AI policies and regulations in many countries, calling for international collaboration to establish universal guidelines. Panelists also addressed the challenge of including both youth and older populations in AI adoption. The potential of AI to support sustainable development goals was discussed, though concerns were raised about the risk of data-poor languages being left behind in AI development.

Participants agreed that multi-stakeholder collaboration, including governments, private sector, and civil society, is crucial for addressing digital exclusion. Key factors identified for ensuring digital inclusion in AI included education, public awareness, capacity building, and human-centered approaches. The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technologies advance.

Keypoints

Major discussion points:

– Digital exclusion and the AI divide between Global North and South

– Challenges of AI accessibility in rural communities and for marginalized groups

– Need for inclusive, human-centered design of AI tools and policies

– Importance of education, capacity building, and digital literacy

– Role of youth and international collaboration in shaping AI policies

Overall purpose:

The goal of this discussion was to explore digital exclusion in relation to emerging technologies, particularly AI, and discuss ways to make AI more inclusive and accessible, especially for underserved communities and the Global South.

Tone:

The tone was largely constructive and solution-oriented. Speakers acknowledged significant challenges but focused on sharing ideas, success stories, and recommendations for improving AI inclusivity. There was a sense of urgency but also optimism about the potential for positive change if the right steps are taken. The tone became more interactive and collaborative when audience members joined the discussion near the end.

Speakers

– Moderator: Facilitator of the discussion

– Maxwell Beganim, African coordinator for Anglophone region for Open Knowledge Foundation Network, former steering committee member for IGF and Youth IGF in Ghana, former executive member of Internet Society Ghana chapter

– Jaewon Son, Doctoral Researcher at Karlsruhe Institute of Technology

– Speaker 3: Doctor from Chad

– Bendjedid Rachad Sanoussi: Technical expert

– Speaker 4: Maxwell, researcher on AI

Additional speakers:

– Florent: Professor of law at the University of Zurich

– Ram Mohan: From Identity Digital and Critical Infrastructure company

– Mbongi Nimsimangasori: Postdoctoral researcher with the Johannesburg Institute for Advanced Study in South Africa

Full session report

Digital Exclusion in the Era of AI: Bridging the Divide

This discussion focused on the critical issue of digital exclusion in the era of artificial intelligence (AI), with particular emphasis on its impact on marginalised groups and developing countries. The panel explored the challenges and potential solutions for bridging the digital divide between the Global North and South, highlighting the complex interplay of technological, social, and policy factors that contribute to this growing disparity.

Key Challenges

The speakers identified several key challenges contributing to digital exclusion:

1. Infrastructure and Access: There is a significant lack of affordable and reliable internet infrastructure in many areas, particularly in rural regions of developing countries. This fundamental gap in connectivity forms the basis of digital exclusion.

2. Language Barriers: The dominance of a few major languages in digital content and AI development poses a significant barrier to inclusion. As noted by the doctor from Chad, explaining concepts like artificial intelligence in local languages can be challenging, highlighting the need for localised content.

3. Digital Literacy: Limited digital literacy, especially in rural areas and among older populations, hinders the adoption and effective use of AI technologies.

4. Policy Gaps: Many countries, particularly in the Global South, lack comprehensive AI policies and regulations, creating uncertainty and potential risks in AI development and deployment. Jaewon specifically highlighted the lack of AI regulations in universities.

5. Data Inequality: Ram Mohan emphasised the growing divide between “data-rich” and “data-poor” languages, which could lead to the marginalisation of minority languages in AI development.

Proposed Solutions and Approaches

The discussion yielded several potential solutions and approaches to address these challenges:

1. Inclusive AI Development: There was strong agreement among speakers on the need for inclusive, human-centred approaches in AI development. Bendjedid Rachad Sanoussi emphasised the importance of putting humans at the centre of AI design to ensure respect for human rights. This approach involves engaging local communities and end-users in the design process of AI systems.

2. Localisation and Language Development: Speakers stressed the crucial role of creating content in local languages to improve digital literacy and make AI more accessible. This includes developing AI tools and interfaces in indigenous languages. The moderator highlighted the need for multilingualism in AI, citing Tanzania’s linguistic diversity as an example.

3. Education and Capacity Building: Investing in education and digital literacy programmes was seen as essential, particularly in rural areas. Maxwell shared success stories of youth-led initiatives like K-Works for Schools, which have helped bridge digital gaps by providing computer labs, internet access, and digital skills training to students in Zimbabwe.

4. Infrastructure Development: Speakers emphasised the need for technology transfer and infrastructure development from the Global North to the South. Rachad suggested using community networks and low-cost satellite technologies to improve internet access, as well as leveraging public-private partnerships to expand infrastructure.

5. Multi-stakeholder Collaboration: There was consensus on the importance of collaboration between governments, the private sector, and civil society in addressing digital exclusion. This collaborative approach was seen as crucial for developing effective AI policies and governance frameworks.

6. Youth Involvement: The discussion highlighted the importance of including young people in shaping AI policies and tools, recognising their role as both users and future leaders in the field.

7. Promoting Open-Source AI: Rachad suggested promoting open-source AI platforms to increase accessibility and foster innovation.

8. Environmentally Friendly AI: Rachad emphasised the need for AI tools to use less energy and be more environmentally friendly.

Specific AI Initiatives

Several specific AI initiatives were mentioned during the discussion:

1. Drone Tech Project in Chad: A project using drones for medical deliveries in remote areas.

2. AI Translation Tool in Benin: An initiative to develop AI-powered translation for local languages.

3. K-Works for Schools in Zimbabwe: A youth-led project providing computer labs and digital skills training to students.

Areas of Agreement and Disagreement

While there was broad consensus on the importance of addressing digital exclusion, speakers emphasised different primary factors and approaches:

– Jaewon focused on the need for technology transfer and universal guidelines for AI use.

– The doctor from Chad stressed the importance of developing local language content for digital literacy and creating safe online spaces for internet users.

– Bendjedid Rachad Sanoussi highlighted the lack of affordable infrastructure and the need for human-centred design in AI, as well as the importance of affordable and energy-efficient devices.

– Maxwell emphasised the success of youth-led initiatives in bridging digital gaps.

Unresolved Issues and Future Considerations

Several important questions remained unresolved and warrant further discussion:

1. The balance between waiting for government policy on AI and allowing industry to lead development.

2. Strategies for preserving and developing AI for minority languages with small speaker populations.

3. Ensuring older populations are not left behind in AI adoption.

4. Addressing the growing divide between data-rich and data-poor languages in AI development.

5. Developing Afrocentric AI tools, as suggested by an audience member from Zimbabwe.

6. Investing in public aspects of development, as emphasised by the doctor from Chad.

Conclusion

The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technologies advance. It highlighted the need for a multi-pronged approach that addresses infrastructure, education, policy, and inclusive design. By fostering collaboration between diverse stakeholders and prioritising human-centred approaches, there is potential to harness AI as a tool for sustainable development and social inclusion. However, significant challenges remain, particularly in bridging linguistic and cultural divides and ensuring equitable access to AI technologies across different regions and populations.

Session Transcript

Speaker 1: My name is Maxwell Biganim and I am from Ghana. I currently serve as the Anglophone Africa coordinator for the Open Knowledge Foundation Network. I also used to be a steering committee member of the Youth IGF in Ghana, and an executive member of the Internet Society Ghana chapter as well. I am very happy to be here this morning. Thank you.

Moderator: Thank you very much, everyone. Before we move further into our discussion, can you hear me? I hope you can. Okay. Yeah, I can't hear myself. So, before we go further: we have recently been having all these discussions about emerging technologies, AI, computation, and so on, but we face the issue that marginalized groups, and people from the Global South, have been left out when it comes to these emerging technologies. So there is a divide that currently exists, especially for women, people with disabilities, and people living in rural areas. In this session we are going to explore the digital exclusion that currently exists in relation to these emerging technologies, and specifically AI; the whole discussion will revolve around that. I'm going to ask questions, and my panelists will respond. To start the discussion, I want to begin with my on-site speakers, starting with you, Jawan. In your opinion, what is the most pressing challenge of digital exclusion in the AI era, and how can we address it?

Speaker 2: Thank you for your question. I think that without cooperation between the Global South and the Global North, the AI divide between them will only widen: the Global South will not be able to advance, and the economic divide will grow as well. So I was thinking that more policies on technology transfer, where the Global North brings infrastructure and helps people understand how to use AI skills so that there is a workforce that can utilize such technology and develop further economic prosperity, would be very beneficial in this case. I also think one of the main challenges in AI these days is that there is not much discussion between people and the technology companies making these technologies: when there is any agreement about how data will be used, it is usually between the companies and the country, and there is not much discussion about how to bring those policies to the different stakeholders and the public. So I think there should be more discussion on how to involve all the stakeholders, including the public. Yeah.

Moderator: Thank you so much, Jawan. Now I want to move to you, doctor. By 2030 we are expected to get there when it comes to the Sustainable Development Goals, right? So how do you think AI can help achieve the SDGs while addressing digital exclusion, especially for marginalized groups?

Speaker 3: I think it's important to keep being optimistic, but from now until 2030 there are only six years left, which is quite a challenge, I'm saying so. And especially when it comes to reflecting realities in the Global South, it's quite impossible, to be honest. But we have to work very deeply in education, for instance, to give local communities skills in digitalization, or in different sectors, for example the economy, creating jobs and employment. We also have to invest in infrastructure, because you see how the North is working to equip academic institutions and provide the needed materials for younger generations to be educated in the context of current AI and all these innovations. So in countries like Chad, for instance, where I come from, we need to invest more in the public aspects of development: contribute to creating awareness, contribute also to regulating the use of digital devices, but also to protecting internet users, because it's important to create a safe place where internet users can feel free to be educated, to work online, or to be consulted by any doctor or any institution around the world. It's important to create the same space when it comes to access to education.

Moderator: All right, so I want to pick up on the point you just made about employment. Taking the example of Chad, what do you think are the key challenges in ensuring that AI tools reach even the rural communities there?

Speaker 3: I have the honor to discuss with Honorable Emma Theofelus, for instance, the ICT Minister of Namibia, who is here. I am very happy to meet her again.
We discussed this issue several times, and it is a challenge because it concerns our local languages: we have communities that only speak their local languages, while internationally we use devices and technology in internationally recognized languages. So first of all, we need to create digital literacy content using our local languages, to allow or to help communities understand the message. For instance, I cannot explain to someone from my community what artificial intelligence means; it would be very difficult to find a word I could use to explain even what technology means. But these people, despite the lack of education, today have a smartphone or a tablet; they use WhatsApp and Facebook, for instance; they get online without knowing anything. So it's important to develop our local languages, and through this we can create content so that our languages are at the top of the program, and create capacity-building programs even for those who have never been to school, the seniors, people of the third age, and the younger generation. This is the most important thing, because our governments and private sectors have to invest more in education by providing the materials needed, equipping them, and then promoting very sustainable capacity-building programs even for lecturers and students in research programs, because even these people are disconnected today from reality. So how can we imagine it will be possible to teach those who have never been to school if we do not treat access to information and education as a high-level priority?

Moderator: Wow, thank you so much. I can really relate to that; I like the point about multilingualism in AI. I come from Tanzania, for the record, and in my country we have almost 121 tribes, each with its own language, and our national language is Swahili. Among all those 121 tribes, we still have people who don't even speak the national language. So if someday we came up with AI tools in Swahili, for example, they wouldn't understand, because they don't even speak Swahili. We have had challenges reaching the rural communities with all this digital knowledge, this AI knowledge. So I think there's really big work to do for countries like mine. I want to come to you, Rashad. You have technical experience, so I want you to talk about how we can ensure AI and other technologies are designed to be inclusive and accessible, especially for underserved communities.

Bendjedid Rachad Sanoussi: Okay. Thank you so much, Miriam, for the question. I also like the point about making it inclusive, because in Africa we face a lot of issues when it comes to technology. We need access to the Internet, and many people don't have it: we lack affordable and reliable Internet infrastructure. In my country, Benin, many people don't have access to the Internet, and even those who do just use it for social media. To address that, maybe we can set up community networks or low-cost satellite technologies that can help everybody have access to the Internet; we need a lot of collaboration to fix that. Now, to come to your question about how we can make AI more inclusive and accessible for everyone: firstly, we need access to the Internet, access to the technology, which is why I was talking about building more infrastructure. To ensure that inclusivity and accessibility really fit into our lives today with AI and emerging technologies, we need to design them with a human-centered approach: we need to put the human in the middle so that these technologies respect human rights. This means we need to involve local communities and users in designing the systems, so that they know how they work and can also contribute to designing the applications and technologies. I can take one example: if we want to design an AI tool, say to translate our local languages, we need to take into account the diversity of African cultures and also the socioeconomic challenges. We talked about Tanzania having a lot of tribes; it's the same in Benin, and the same in Chad. We need to take those into consideration when designing AI tools as well.
Another aspect is how we can make AI use less energy so that it can be greener. When we talk about AI, it often involves a lot of algorithms and a lot of energy use. Tools like AI models need to be optimized for mobile devices, so that we respect the environment and those AI tools are green as well. Another thing we can do is promote open-source AI platforms, so that everybody can access them, know how they work, and contribute. By doing that, communities, and especially youth, innovators, and startups, can co-develop cost-effective solutions for the specific issues we face in Africa and other parts of the world. So that will be my contribution. Thank you.

Moderator: Thank you so much, Rashad. I really liked the point about the human-centred approach when developing these AI systems, because if you assume someone's needs without clearly understanding what they really need, and then develop solutions for them, those solutions may not be useful to them. So it's really important that we engage these communities, have these marginalised groups in the room, and understand their needs: what do you want? How can we assist to ensure you are not left behind in all these discussions? So we have talked about the challenges. Now, Maxwell, I want you to talk about the success stories. Can you share success stories where youth-led initiatives have bridged digital gaps and improved inclusion?

Speaker 4: All right, thank you very much. I'll share my own success stories from some of the projects that I have led, and thank you to the three wonderful speakers who have spoken; I am so much more educated right now. When it comes to success stories, I think it's first important to understand the problem that exists, which is the digital divide in most of the communities I have had the opportunity to work with. Firstly, there's a project I started with a colleague called K-Works for Schools. I realized that most of the senior high schools in Ghana, for example, had computers, but the computers were not functional, and those that were functional were left abandoned: there was no software, no applications that students could engage with. So we came up with this project, where first we go into the institutions and train teachers to understand the concept of digital literacy. We take them through about an hour of digital literacy: how they can understand it and how they can look at all its parameters. And don't forget that teachers are custodians of knowledge, so once the teachers appreciate digital literacy, it becomes easy to transfer it in their classrooms. After that, we do the installation of K-Works, which is an offline educational resource that enables students to access content like Wikipedia, TED, and so on. We also took the opportunity to train the students. Because the students were very many, we decided to sample the class reps so that they would serve as digital ambassadors to their colleagues. We trained them on how to use K-Works and also held a digital citizenship masterclass for them.
Now, we realized that based on just the one school we started with, a lot of schools began reaching out to my colleague and me to expand this project. So we extended it to other senior high schools, and it was gaining traction; I'll share some materials later. Then we decided that the senior high schools could not always be the only ones to benefit, and we took it back to the basic schools, where we trained them on digital literacy. Some of the schools we went to didn't even have computers, but we understand that once people have the understanding, it shifts their minds to think along that tangent. So that was also a very good success story. As I said, K-Works has moved from Ghana to the African region, and now we are even training ambassadors in various countries to run some of these projects as well. That is just one of the projects we did in bridging the digital gap. There is also a project called the Life Project that we worked on with Paradigm Initiative, targeting people who ordinarily do not have access to digital literacy or digital skills: we were able to mobilize people who had left school, or were no longer in school, and train them in some of these skills. I think it was a very, very successful project, because now some of them are in graphic design and many other areas of digital work. Because of time, these are the two key success stories that I wanted to share with us all in terms of bridging the digital divide.

Moderator: Wow, thank you so much. Congratulations on the great work that you're doing. Now, I want to open the floor for a few minutes. In your specific countries, do you have policies that help regulate AI? Is there anyone on the floor who would like to share, from your respective countries, any policies that ensure equitable access in AI? Do you even have any policies you would like to share? No? Okay. So there is great work we need to do. So maybe I can come back to you, Jawan. We have seen from the floor, and in my country too, that we don't have such policies. So how can international collaboration help ensure that we have proper regulation of AI in our respective countries? Yeah.

Speaker 2: Thank you. Yeah. My university in Germany, even though it was the first university in Germany to have an email server, also doesn't have any regulations about what is acceptable use of AI, for example in research, your studies, or tests. So I always wondered why. We always talk about how we should make everyone aware of AI adoption, but what are the universities and international organizations doing so that people know what is right and wrong? I think this is where international cooperation, or I would say an international alliance, comes in handy: there should be universal guidelines and regulations so that people know what is okay and what is not when using AI. I think yesterday in the youth session they were talking about AI and education. While we talk about marginalized people who don't know how to use AI, are we ready for them to actually use AI in universities or companies? Just providing the AI is not the end of everything. It begins with all the problems around privacy, where we do not have internet governance ready to include every end user with a right to say how their data is used or not. So I think international cooperation should really focus first on how we integrate all the end users and stakeholders in this governance, and second on having proper regulations and a universal guideline for how we deal with such problems.

Moderator: Great. Thank you so much. I'm coming back to you, doctor. You've done some research on AI, right? I want you to share examples where AI was successfully implemented to address digital exclusion. Is there any instance you can share with us?

Speaker 3: Well, very largely, when it comes to benefiting from the use of AI, for instance, in Chad we have a drone tech initiative that provides support to rural communities, assisting them when there is a social crisis. For instance, during the crisis in Sudan, we had refugees at the border of Chad seeking help, so we used those drones to bring support to the refugees. Also, in terms of education, we have initiatives that help children better understand the alphabet and vocabulary, or even talk to people in different languages using these tools. So it's important to invest in this, because we have so many challenges to address, and for that we need multi-stakeholder collaboration, because no stakeholder alone can address these challenges; we can only do so if we contribute together as one strong multi-stakeholder body. So I call on, or even urge, the UN and its partners and our governments to take this as a serious engagement, especially in countries of the South. It is essential to align ourselves with today's realities. When it comes to tackling modernization challenges, we are a little bit behind, and we will only solve this with digital inclusion and the digitalization of society. Because my worry is that in 10 or 20 years we will perhaps have AI tools able to give, let's say, 95% of medical consultations and even provide a prescription, while in developing countries of the South, if you explain this to someone, it could be very strange.
So we need to work hard to align ourselves with these realities in advance.

Moderator: Thank you so much, doctor. Now, Rashad, we have talked about all these challenges. We have seen that we don't have proper regulations when it comes to AI within our specific countries, and we have also seen the divide that currently exists between rural and urban communities. So what steps do you think we can take to bridge the digital divide between rural and urban areas?

Bendjedid Rachad Sanoussi: Okay, thank you for the question. I think we have a lot to do; we have many issues when it comes to inclusivity. From a technical perspective, I think we can first expand the infrastructure, because we need infrastructure so that we can have access to the Internet and these technologies. To do that, maybe we can also build community networks. You know, in Africa many countries do not have access to 5G yet, so we can expand the infrastructure by working on 5G technology or developing satellite-based internet. You know, Starlink is growing in Africa, and it's sometimes more affordable for people in remote areas; we can leverage public-private partnerships to do that as well. After expanding our infrastructure, we can also promote energy access. In some rural communities there is no available electricity, and when we talk about Internet and connectivity, it's also about access to electricity: if you don't have electricity, you cannot have Internet. So it's really crucial to have access to energy so that we can promote digital inclusion, using renewable energy like solar and wind to do that. Another way may be to have affordable devices and solutions, because the issue of money is also crucial. We need affordable devices so that people can buy them, and those devices should also be energy-efficient so they do not use a lot of energy. We can also localize content and services: you know, we have issues with language barriers, so solutions should be more localized, and we can develop a lot of content in our local languages on these technologies as well. And we can leverage our communities to do that: if we want to build solutions, we need to do it with our communities, so that we can empower local youth and also encourage entrepreneurs.
I see in the room we have many entrepreneurs here. So as entrepreneurs, it's really important to build, together with our communities, the solutions they really need. Thank you.

Moderator: Thanks so much, Rashad. And since our time is almost over, I want to go last to you, Maxwell. What role do you think young people can play in shaping AI policies and tools to ensure that they address the needs of diverse populations?

Speaker 4: Yeah, so I think this is a very important question. First, the design approach of all these policies around AI should be very inclusive, where young people do not only participate but are involved in crafting some of these policies. And for young people to be included, we also need capacity building and capacity enhancement on how to understand and leverage the language of AI: you don't necessarily need to be a programmer or a hardcore developer to understand the parameters within which artificial intelligence works. And we would all agree that AI has come to stay. So in order for young people not to become, maybe I'll use this term, AI immigrants, they should be able to understand those parameters, and this should also be included in mainstream education, so that young people can think critically about the use of AI as well. So I think the inclusive approach is very important in crafting and shaping this, in order to allow young people to be involved. And even in the high-level discourse, the plenaries and conversations around this, we need the input of young people in shaping the policies as well. Yeah, so these are the thoughts that come to mind. Also, there should be programs structured to make sure the capacity of young people is built to understand AI.

Moderator: Thank you, Maxwell. So now we have heard from our panelists, and I want to open the floor again to the participants. Do you have any questions for the speakers, or any contributions you want to add to this ongoing discussion? Yes, please.

Audience: You mentioned a lot about young people, but I'm wondering, as opposed to the young population, what can we do for the elderly to make sure that they have equal access to AI or the internet? A lot of older people nowadays have ingrained ideologies opposing AI, and due to physical or cognitive issues they have trouble getting access to the internet. Some of them also lack enough money. So how do you think we can address that issue?

Moderator: Who wants to respond?

Speaker 3: Thank you for the question. Well, for those who are professionals, I think it would be a little easier to support the process, because we can call on companies to run capacity-enhancement programs for these professionals: any time a certification or a new program comes out, we can organize such programs to reinforce their capacities in certain subjects. But for those who are not professionals, never worked, or don't have a quality education, we can work on digital literacy by creating, as we said, content using local languages to help them understand the use of these tools and digital devices, through programs that can help them at their age. So that's why it's important to keep working very hard to initiate programs that support the younger generation to be well educated and to align itself with today's realities, or the realities of the 21st century in general.

Moderator: Anyone who wants to add? Okay. Something I wanted to add: I think it's really important to understand, taking my country as an example, that we can't go directly to introducing AI, because there's still a very large group of people who are not digitally literate. We have people who can't even turn on a computer or do a simple Google search. So to help, let's say, the elders in that kind of situation, we might need to start with the basics: we can't go to AI if they don't even have the basics of computers. They need to understand the basic stuff first, and then moving forward will at least be easier, I guess. Yeah. Any other questions from the floor? Any contributions? Yes.

Audience: So thank you very much for these interesting contributions. My name is Florent. I'm a professor of law at the University of Zurich, so I come from a very privileged country, but I think it might be interesting for you to know that we also have minority languages. We have a language in Switzerland spoken by fewer than 20,000 people; it is even split into different dialects whose speakers have difficulties understanding each other. So developing AI tools for these language groups is a huge challenge, simply because there's a lack of data and a lack of users as well. But given that some developed countries face similar problems, there is a chance to cooperate across continents on this, I think, very important issue.

Moderator: Great, insightful contribution. Yeah. Thank you.

Audience: My name is Ram Mohan, and I'm with a company called Identity Digital, in critical infrastructure. One of the things I've been quite concerned about with AI and digital inclusion, especially in the area of languages, as in the prior intervention, is that I think we are now in an era of data-poor and data-rich languages. If you look forward over the next five to ten years, AI systems that train on language sets are going to train disproportionately on data-rich languages and leave data-poor languages to the side. And I think if we don't take action now, we risk the data-poor languages disappearing from the digital infrastructure, and the people who speak and use those languages are also going to go away. So I'm quite concerned about that. I wonder if you have a perspective on it.

Moderator: Alice, do you have any contribution on that?

Speaker 3: It's not a contribution, but perhaps I can add a little more to this question and ask our professor from Switzerland, because when I was in Lyon during my PhD program, I used to go to Geneva, where I'm more connected with certain programs. So do you think, professor, that AI use, or let's say digitalization, can be an opportunity to develop minority languages, the less-spoken languages? Because if we work on that, we have no choice but to create programs that will develop these languages, add more vocabulary, and create programs for languages where the vocabulary is very limited. How do you think we can work on that?

Audience: Thank you very much. Tough question. I'm not sure I have any ideas for how to work on that, but I fully agree that it's important to bring these language groups into this technology. Because even right now there are risks of being left behind, and there's simply a danger of these languages going extinct, because, as opposed to some of the African countries, these speakers also speak German or Italian on top, so they are able to communicate in other languages, which raises the risk of the language going extinct. Using that language in these technologies might be a means to promote it and give people a chance to communicate through these technologies. So I think it's super important, but no solution so far, I'm sorry.

Moderator: Okay, any contribution? Yes, at the back. Can you please pass the mic to the back? Over there.

Audience: Thank you for an interesting presentation, I must say; I resonate with what you're all saying. My name is Mbongi Nimsimangasori. I'm a postdoctoral researcher with the Johannesburg Institute for Advanced Study in South Africa. I originally come from Zimbabwe, and one of the issues we face in our country is what I may call policy contradiction. So the question is: should we wait for government to actually introduce policy, or should we let the industry lead? I must say that the media sector in particular already has AI robots, notably at an institute called CITE, the Center for Technology Innovation; it's mainly a news site. They're already developing an LLM for one of the main languages in Zimbabwe, the isiNdebele language. So that's one of the critical aspects: should we actually wait for government? Because government is delaying, and for them to pass these policies and laws takes so much time, while these institutes are going ahead with AI development. In Zimbabwe as well, one of the resolutions from consultations with Internews and the Media Institute of Southern Africa was to develop Afrocentric AI tools, so that's one of the critical aspects we can perhaps take a lead from. But I must say that we are still far from actually developing policy, considering the slow pace at which everything is going. But CITE, as I mentioned, has taken a very critical role in developing this LLM, including for minority languages. We had an uproar recently as well, where that particular robot could not even pronounce indigenous names or African names, and a lot of people were complaining about that.
So, I think coming up with this LLM would really probably assist in that as well. So, that’s my main contribution and very interesting work that everyone is actually doing here in AI as well.

Moderator: Thank you very much, everyone, for your contributions. This has really been an amazing discussion, and we have less than five minutes left, so allow me to close the session. I would like to ask my speakers: if you were to describe in a single word the critical factor for ensuring digital inclusion in AI, what would it be? Yes. Okay. Thank you. Education. Chair Wong? Public awareness. My online speakers? Capacity. Capacity. Human-based approach. Thank you, everyone. Thank you very much for attending and joining this session today, and we look forward to seeing you in other sessions. Thank you. Thank you to the panelists, too.

Speaker 2

Speech speed

140 words per minute

Speech length

509 words

Speech time

218 seconds

Need for technology transfer and infrastructure from Global North to South

Explanation

The speaker argues that there is an AI divide between the Global North and South. To address this, technology transfer and infrastructure development from the Global North to the South is necessary.

Evidence

Suggestion of policies on technology transfer to bring infrastructure and AI skills to the Global South

Major Discussion Point

Digital Exclusion and AI Divide

Agreed with

Bendjedid Rachad Sanoussi

Agreed on

Addressing infrastructure and access challenges

Differed with

Speaker 3

Bendjedid Rachad Sanoussi

Differed on

Approach to addressing digital exclusion

Lack of AI regulations in many countries and institutions

Explanation

The speaker points out that many institutions, including universities, lack regulations on acceptable AI use. This creates uncertainty about what is right or wrong in using AI for research or studies.

Evidence

Example of the speaker’s university in Germany lacking regulations on AI use

Major Discussion Point

AI Policy and Regulation

Need for universal guidelines on acceptable AI use

Explanation

The speaker suggests that international cooperation is needed to create universal guidelines and regulations for AI use. This would help people understand what is acceptable in using AI across different contexts.

Major Discussion Point

AI Policy and Regulation

Speaker 3

Speech speed

121 words per minute

Speech length

1163 words

Speech time

573 seconds

Importance of developing local language content for digital literacy

Explanation

The speaker emphasizes the need to create digital literacy content in local languages. This is crucial for helping communities understand technology and digital concepts in their native tongues.

Evidence

Example of difficulty in explaining concepts like artificial intelligence in local languages

Major Discussion Point

Digital Exclusion and AI Divide

Agreed with

Speaker 2

Bendjedid Rachad Sanoussi

Speaker 4

Agreed on

Need for inclusive AI development

Differed with

Speaker 2

Bendjedid Rachad Sanoussi

Differed on

Approach to addressing digital exclusion

Importance of multi-stakeholder collaboration on AI governance

Explanation

The speaker stresses the need for collaboration among multiple stakeholders to address AI challenges. No single stakeholder can solve these issues alone, necessitating a united effort.

Evidence

Call for UN, partners, and governments to take serious engagement in addressing AI challenges in the Global South

Major Discussion Point

AI Policy and Regulation

Bendjedid Rachad Sanoussi

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Lack of affordable and reliable internet infrastructure in many areas

Explanation

The speaker highlights the lack of affordable and reliable internet infrastructure as a major challenge in many areas, particularly in Africa. This lack of infrastructure hinders access to digital technologies and AI.

Evidence

Example of many people in Benin lacking internet access or only using it for social media

Major Discussion Point

Digital Exclusion and AI Divide

Agreed with

Speaker 2

Agreed on

Addressing infrastructure and access challenges

Differed with

Speaker 2

Speaker 3

Differed on

Approach to addressing digital exclusion

Need for human-centered approach in AI design

Explanation

The speaker advocates for a human-centered approach in AI design to ensure inclusivity and accessibility. This approach involves putting humans at the center and respecting human rights in technology development.

Evidence

Suggestion to involve local communities and users in designing AI systems

Major Discussion Point

Inclusive AI Development

Agreed with

Speaker 2

Speaker 3

Speaker 4

Agreed on

Need for inclusive AI development

Promoting open-source AI platforms for accessibility

Explanation

The speaker suggests promoting open-source AI platforms to increase accessibility. This would allow more people to access, understand, and contribute to AI development.

Evidence

Mention of enabling communities, youth, innovators, and startups to develop cost-effective solutions

Major Discussion Point

Inclusive AI Development

Expanding infrastructure and promoting energy access

Explanation

The speaker emphasizes the need to expand infrastructure and promote energy access to bridge the digital divide. Access to electricity is crucial for internet connectivity and digital inclusion.

Evidence

Suggestion to leverage public-private partnerships and promote renewable energy sources like solar and wind

Major Discussion Point

Bridging Rural-Urban Digital Divide

Agreed with

Speaker 2

Agreed on

Addressing infrastructure and access challenges

Providing affordable and energy-efficient devices

Explanation

The speaker highlights the importance of providing affordable and energy-efficient devices to promote digital inclusion. This addresses both financial and energy constraints in accessing digital technologies.

Major Discussion Point

Bridging Rural-Urban Digital Divide

Developing localized content and services

Explanation

The speaker stresses the need to develop localized content and services to address language barriers and make digital technologies more relevant to local communities. This involves creating content in local languages and tailoring services to local needs.

Major Discussion Point

Bridging Rural-Urban Digital Divide

Speaker 4

Speech speed

154 words per minute

Speech length

930 words

Speech time

361 seconds

Success of youth-led initiatives like K-Works to bridge digital gaps

Explanation

The speaker shares the success of K-Works, a youth-led initiative that aims to bridge digital gaps in schools. The project involves installing offline educational resources and training teachers and students on digital literacy.

Evidence

Description of K-Works project implementation in Ghana schools and its expansion to other African countries

Major Discussion Point

Digital Exclusion and AI Divide

Including young people in shaping AI policies and tools

Explanation

The speaker emphasizes the importance of including young people in shaping AI policies and tools. This involves not just participation but active involvement in crafting policies and understanding AI concepts.

Evidence

Suggestion to include AI understanding in mainstream education for young people

Major Discussion Point

Inclusive AI Development

Agreed with

Speaker 2

Speaker 3

Bendjedid Rachad Sanoussi

Agreed on

Need for inclusive AI development

Moderator

Speech speed

148 words per minute

Speech length

1404 words

Speech time

567 seconds

Starting with basic digital literacy before introducing AI

Explanation

The moderator suggests that in some countries, it’s necessary to start with basic digital literacy before introducing AI. This is because many people lack even basic computer skills.

Evidence

Example from the moderator’s country where many people can’t perform simple computer tasks

Major Discussion Point

Bridging Rural-Urban Digital Divide

Audience

Speech speed

138 words per minute

Speech length

864 words

Speech time

375 seconds

Challenge of developing AI for languages with small speaker populations

Explanation

An audience member highlights the challenge of developing AI tools for languages with small speaker populations. This is due to the lack of data and users for these languages.

Evidence

Example of a language in Switzerland spoken by fewer than 20,000 people

Major Discussion Point

AI and Minority Languages

Risk of data-poor languages disappearing from digital infrastructure

Explanation

An audience member expresses concern about the risk of data-poor languages disappearing from digital infrastructure. AI systems are likely to train disproportionately on data-rich languages, potentially marginalizing data-poor languages.

Major Discussion Point

AI and Minority Languages

Potential for AI to help preserve and develop minority languages

Explanation

An audience member suggests that AI and digitalization could be an opportunity to develop and preserve minority languages. This could involve creating programs to develop these languages and add more vocabularies.

Major Discussion Point

AI and Minority Languages

Development of language models for local African languages

Explanation

An audience member shares an example of AI development for local African languages. This involves the creation of a language model for a major language in Zimbabwe by a local technology institute.

Evidence

Example of the Centre for Innovation and Technology (CITE) developing an LLM for the IsiNdebele language in Zimbabwe

Major Discussion Point

AI and Minority Languages

Question of whether to wait for government policy or let industry lead

Explanation

An audience member raises the question of whether to wait for government to introduce AI policies or let the industry lead. This highlights the tension between slow policy-making processes and rapid technological development.

Evidence

Example from Zimbabwe where institutes are developing AI tools while government policy lags behind

Major Discussion Point

AI Policy and Regulation

Agreements

Agreement Points

Need for inclusive AI development

Speaker 2

Speaker 3

Bendjedid Rachad Sanoussi

Speaker 4

Need for technology transfer and infrastructure from Global North to South

Importance of developing local language content for digital literacy

Need for human-centered approach in AI design

Including young people in shaping AI policies and tools

The speakers agree on the importance of inclusive AI development, emphasizing technology transfer, local language content, human-centered design, and youth involvement.

Addressing infrastructure and access challenges

Speaker 2

Bendjedid Rachad Sanoussi

Need for technology transfer and infrastructure from Global North to South

Lack of affordable and reliable internet infrastructure in many areas

Expanding infrastructure and promoting energy access

The speakers agree on the need to address infrastructure and access challenges to bridge the digital divide and promote AI inclusion.

Similar Viewpoints

Both speakers emphasize the importance of creating content and services in local languages to make digital technologies more accessible and relevant to local communities.

Speaker 3

Bendjedid Rachad Sanoussi

Importance of developing local language content for digital literacy

Developing localized content and services

Both speakers advocate for inclusive policy-making processes in AI, emphasizing the need for universal guidelines and youth involvement in shaping AI policies.

Speaker 2

Speaker 4

Need for universal guidelines on acceptable AI use

Including young people in shaping AI policies and tools

Unexpected Consensus

Potential of AI to preserve minority languages

Audience

Speaker 3

Potential for AI to help preserve and develop minority languages

Importance of developing local language content for digital literacy

There was an unexpected consensus between an audience member and Speaker 3 on the potential of AI to help preserve and develop minority languages, despite the challenges posed by data scarcity for these languages.

Overall Assessment

Summary

The main areas of agreement include the need for inclusive AI development, addressing infrastructure and access challenges, developing localized content, and involving diverse stakeholders in AI policy-making.

Consensus level

There is a moderate level of consensus among the speakers on the key challenges and potential solutions for digital inclusion in AI. This consensus suggests a shared understanding of the complexities involved in bridging the digital divide and the need for multi-faceted approaches to address these issues. However, there are still variations in the specific solutions proposed, indicating the need for further dialogue and collaboration to develop comprehensive strategies for AI inclusion.

Differences

Different Viewpoints

Approach to addressing digital exclusion

Speaker 2

Speaker 3

Bendjedid Rachad Sanoussi

Need for technology transfer and infrastructure from Global North to South

Importance of developing local language content for digital literacy

Lack of affordable and reliable internet infrastructure in many areas

Speakers emphasized different primary factors for addressing digital exclusion: technology transfer, local language content development, and infrastructure improvement.

Overall Assessment

Summary

The main areas of disagreement centered around prioritizing different approaches to address digital exclusion and AI governance.

Difference level

The level of disagreement among speakers was relatively low. Most speakers presented complementary rather than conflicting viewpoints, focusing on different aspects of the same overarching issues. This suggests a multifaceted approach may be necessary to address digital exclusion and ensure inclusive AI development.

Partial Agreements

All speakers agreed on the need for inclusive AI governance, but emphasized different aspects: universal guidelines, multi-stakeholder collaboration, human-centered design, and youth involvement.

Speaker 2

Speaker 3

Bendjedid Rachad Sanoussi

Speaker 4

Need for universal guidelines on acceptable AI use

Importance of multi-stakeholder collaboration on AI governance

Need for human-centered approach in AI design

Including young people in shaping AI policies and tools

Takeaways

Key Takeaways

There is a significant digital divide and AI divide between the Global North and South that needs to be addressed

Lack of infrastructure, affordable internet access, and digital literacy are major barriers to AI inclusion in developing countries

Developing AI tools and content in local languages is crucial for digital inclusion

A human-centered, inclusive approach is needed in AI development to ensure it meets the needs of diverse populations

Youth and local communities should be involved in shaping AI policies and tools

There is a lack of AI regulations and policies in many countries, especially in the Global South

Resolutions and Action Items

Promote technology transfer and infrastructure development from Global North to South

Develop more localized AI content and tools in indigenous languages

Implement digital literacy programs, especially in rural areas

Include youth and marginalized groups in AI policy development

Expand internet infrastructure and promote affordable access

Create universal guidelines for acceptable AI use

Unresolved Issues

How to effectively regulate AI across different countries and contexts

How to preserve and develop AI for minority languages with small speaker populations

How to ensure older populations are not left behind in AI adoption

Whether to wait for government policy on AI or let industry take the lead

How to address the growing divide between data-rich and data-poor languages in AI development

Suggested Compromises

Balancing rapid AI development with careful consideration of inclusivity and ethics

Collaborating across developed and developing countries to address shared challenges like minority language preservation

Starting with basic digital literacy before introducing advanced AI concepts in some communities

Thought Provoking Comments

We need to create digital literacy content using our local languages to help communities understand the message. For instance, I cannot explain to someone from my community what artificial intelligence means.

speaker

Speaker 3

reason

This highlights the critical challenge of language barriers in AI adoption, especially in diverse linguistic regions. It emphasizes the need for localized content to make AI accessible.

impact

This comment shifted the discussion towards the importance of multilingualism in AI development and sparked further conversation about language diversity challenges in different countries.

To ensure that inclusivity and accessibility really fit into our lives today, we need to design AI and emerging technologies with a human-centered approach: we need to put the human at the center so that this technology can respect human rights.

speaker

Bendjedid Rachad Sanoussi

reason

This comment emphasizes the critical importance of human-centric design in AI development, ensuring technology serves human needs and respects rights.

impact

It refocused the discussion on ethical considerations in AI development and the need to involve local communities in the design process.

I think we are now in an era of data-poor and data-rich languages. And I think if you look forward over the next five to ten years, AI systems that train on language sets are going to train disproportionately on data-rich languages. They’re going to leave data-poor languages to the side.

speaker

Ram Mohan

reason

This comment introduces a crucial perspective on the long-term implications of AI development on language diversity and preservation.

impact

It deepened the conversation by highlighting a potential future challenge in AI and language, prompting further discussion on how to address this issue.

Should we wait for government to introduce policy, or should we let the industry lead?

speaker

Mbongi Nimsimangasori

reason

This question raises an important point about the balance between government regulation and industry innovation in AI development.

impact

It introduced a new dimension to the discussion about policy development and implementation, highlighting the tension between waiting for government action and allowing industry to lead innovation.

Overall Assessment

These key comments shaped the discussion by highlighting critical challenges in AI adoption and development, particularly in diverse linguistic and cultural contexts. They broadened the conversation from technical aspects to include ethical considerations, policy challenges, and long-term implications for language and cultural preservation. The discussion evolved from identifying problems to exploring potential solutions and considering the roles of different stakeholders in addressing these challenges.

Follow-up Questions

How can we develop AI tools and content in local languages to improve digital literacy and inclusion?

speaker

Speaker 3 (Dr. from Chad)

explanation

This is crucial for ensuring AI technologies are accessible and useful to diverse populations, especially in rural areas and developing countries.

What policies and regulations are needed to govern AI use and development across different countries?

speaker

Moderator and Speaker 2 (Jawan)

explanation

There is a lack of clear policies in many countries, highlighting the need for international collaboration on AI governance.

How can we ensure multi-stakeholder collaboration, including local communities and end-users, in AI development and governance?

speaker

Speaker 2 (Jawan) and Bendjedid Rachad Sanoussi

explanation

This is important for creating AI systems that truly meet the needs of diverse populations and respect human rights.

What strategies can be employed to bridge the digital divide between rural and urban areas?

speaker

Moderator

explanation

This is critical for ensuring equitable access to AI and digital technologies across different geographic regions.

How can we address the needs of older populations in accessing and using AI and digital technologies?

speaker

Audience member

explanation

This highlights the importance of considering all age groups in digital inclusion efforts, not just youth.

How can we prevent the disappearance of data-poor languages from digital infrastructure as AI systems advance?

speaker

Audience member (Ram Mohan)

explanation

This is crucial for preserving linguistic diversity and ensuring AI doesn’t exacerbate existing language inequalities.

Should we wait for government policies or allow industry to lead in AI development and implementation?

speaker

Audience member (Mbongi Nimsimangasori)

explanation

This highlights the tension between policy development and technological progress, particularly in countries where government action may be slow.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.